
Artificial Intelligence is incredible, and it's going to screw us over. Right now, schools are following the same tired “give everyone the tool” model they used for Word and Excel. Teachers are left to fumble through AI prompting with inconsistent results, bad training, and no collaboration. Eventually, they'll appoint a gatekeeper, someone who's supposed to “help”, and replicate that failure in every school, across every state. Institutionalised inefficiency, at scale. But that's not how AI works best. In two hours, I built a code-driven AI pipeline that produced an 18-page, curriculum-aligned Year 9 Vikings lesson plan, without writing a single line of code myself. Teachers didn't need to prompt it. Developers barely needed to touch it. That's the real AI model: automate the work entirely, not just give people shinier shovels. The problem? That model makes a lot of people redundant. AI's entire point is to remove human labour while keeping the product (and the profit). Which is why it's both the most amazing tool I've ever used… and the reason we're all doomed.
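For a flavour of what “code-driven” means here, below is a minimal sketch of that kind of pipeline, assuming the OpenAI Python client; the model choice, `PROMPT_TEMPLATE`, and `generate_lesson_plan` helper are illustrative stand-ins, not the actual build.

```python
# Minimal sketch of a code-driven lesson-plan pipeline, assuming the
# OpenAI Python client. The model, template, and parameters below are
# illustrative; the actual build's stack and prompts are not shown here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are an experienced {year} {subject} teacher.
Produce a curriculum-aligned lesson plan on "{topic}" covering:
learning outcomes, a lesson sequence, differentiation, and assessment."""

def generate_lesson_plan(year: str, subject: str, topic: str) -> str:
    """One pipeline stage: the prompt lives in code, not with the teacher."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(year=year,
                                                     subject=subject,
                                                     topic=topic)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_lesson_plan("Year 9", "History", "The Vikings"))
```

The point of the design, whatever the real template looked like, is that the prompting lives in code: teachers never see it, so there is nothing for them to get wrong.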
Created: 10/08/2025 Updated: Never

Think your knowledge work is immune to automation? Brace yourself: AI is steadily making inroads into white-collar professions, shattering the illusion of job security. The question is whether you'll adapt and prepare, or be caught off guard.
Created: 30/07/2025 Updated: Never

Forget the “AI is unsafe” angle. This breach traces back to a long-forgotten 2019 test admin account with no MFA or SSO, broken access controls (IDOR), over-permissive data access, and absent monitoring. Paradox touts ISO 27001 and SOC 2 while running a threadbare security function, and McDonald's signed off without enforcing supplier governance. The real story isn't a clever chatbot gone rogue; it's what happens when you outsource risk and treat security like paperwork.
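For readers unfamiliar with the jargon, here is a minimal sketch of what an IDOR bug looks like; Flask and the route below are hypothetical stand-ins, not Paradox's actual stack.

```python
# Illustrative only: what "broken access controls (IDOR)" looks like in
# code. The bug is trusting an ID taken from the URL without checking
# that the logged-in user owns the record it points to.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only"  # needed for session support in this sketch

APPLICATIONS = {1: {"owner": "alice", "resume": "..."},
                2: {"owner": "bob", "resume": "..."}}

@app.route("/applications/<int:app_id>")
def get_application(app_id: int):
    record = APPLICATIONS.get(app_id) or abort(404)
    # Vulnerable version: return jsonify(record) right here, letting any
    # authenticated user walk app_id 1, 2, 3, ... and read everyone's data.
    # The fix is a one-line ownership check:
    if record["owner"] != session.get("user"):
        abort(403)
    return jsonify(record)
```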
Created: 16/07/2025 Updated: Never

Forget the webinars, the guru hacks, and the overpriced “prompt engineering” courses. Getting good results from LLMs like ChatGPT or Claude isn't magic: it's about clarity, context, and a bit of common sense. Whether you're trying to generate a report, cheat on your homework, or just look smart in a meeting, this post will help you write better prompts without falling for the BS.
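As a taste of the advice inside, here is the same request phrased two ways; the field labels and the sales example are illustrative conventions, not a required syntax.

```python
# Illustrative only: the structured version applies the post's advice
# (clarity, context, explicit format); the labels are one convention,
# not magic words.
vague_prompt = "Write something about our sales."

structured_prompt = """Role: You are a financial analyst writing for an
executive team with no time and no patience for jargon.
Context: Q2 sales figures are pasted below.
Task: Summarise the three most important trends in plain English.
Format: Three bullet points, one sentence each.

Data:
{sales_data}"""  # {sales_data} is a placeholder you fill in before sending
```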
Created: 30/06/2025 Updated: Never

AI systems learn from human data, which means bias and variance inevitably seep in. Over 30,000 tests using ChatGPT to grade student papers revealed clear bias linked to student names and the gender and race they signal. For example, papers submitted under female Aboriginal Australian or White Australian names scored higher on average, while certain male names scored lower. Scores clustered into distinct patterns across these variables, exposing underlying bias in the model's output. This raises important questions about how to deploy AI fairly in educational assessment and what can be done to mitigate these biases. Further testing with different AI models is underway to better understand and address the problem.
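For context, here is a minimal sketch of that kind of test harness, assuming the OpenAI Python client; the name list, prompt wording, and 0-100 scale are illustrative, not the actual 30,000-run setup.

```python
# Sketch of a name-bias test: the same essay is graded repeatedly with
# only the student's name changed, then mean scores are compared per name.
from statistics import mean
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ESSAY = "..."  # one fixed essay text, identical across every run
NAMES = ["Kirra", "Jarrah", "Emily", "Jack"]  # hypothetical name set

def grade_essay(name: str) -> float:
    """Ask the model to grade the same essay, varying only the name."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"Grade this paper by student {name} out of "
                              f"100. Reply with the number only.\n\n{ESSAY}"}],
    )
    return float(response.choices[0].message.content.strip())

def run_trials(runs_per_name: int = 100) -> dict[str, float]:
    # The essay never changes, so any consistent gap between these
    # means is bias linked to the name alone.
    return {name: mean(grade_essay(name) for _ in range(runs_per_name))
            for name in NAMES}
```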
Created: 26/06/2025 Updated: Never

Calls for “ethical AI” are everywhere—but what we really want is AI that aligns with our ethics, not someone else’s. In this provocative essay, David Cheal argues that “ethical AI” is a myth, shaped less by morality and more by legal, commercial, and cultural pressures. From international censorship to marketing carcinogens, he illustrates how AI ethics are inconsistently applied, often hypocritical, and ultimately reflect the worldview of whoever controls the model. The result? You don’t want ethical AI. You want AI that agrees with you.
Created: 19/06/2025 Updated: Never

Calls to professionalise cybersecurity in Australia often miss the deeper issue: you can't have professional people in a system that isn't professional. This post argues that unless cybersecurity standards are backed by legislation—and enforced independently of employers—they're just marketing fluff. Real professions have legal backing, independent standards, and consequences for misconduct. Without this, cybersecurity roles remain beholden to commercial interests, powerless to enforce best practices, and vulnerable to retaliation for speaking up. Until we address the structural and economic incentives, cybersecurity professionalism will remain a title, not a guarantee.
Created: 19/06/2025 Updated: Never

Cybersecurity often fails not due to lack of technology, but because people resist even mild inconvenience. While it's easy to criticise the industry, the most impactful improvement anyone can make is accepting that good security is inherently annoying. Use password managers, enable MFA, set complex passwords, and stop recycling credentials. Tech teams and execs must lead by example—secure your own accounts, then help your friends and family do the same. Security starts with personal responsibility, not policies.
Created: 19/06/2025 Updated: Never

If you can't afford it, don't do it.
Created: 19/06/2025 Updated: Never

Emergency Cybersecurity for those at risk of domestic violence.
Created: 17/06/2025 Updated: Never

Proactive Cybersecurity for those at risk of domestic violence.
Created: 17/06/2025 Updated: Never

On 25 March 2024, small Sydney construction firm Calida Projects was breached by ransomware group Akira. While the incident hasn't made headlines, it highlights a harsh reality: small businesses often lack the resources, expertise, and incentives to prioritise cybersecurity. With weak defences and limited capacity to respond, breaches like this remain common—and largely invisible.
Created: 25/03/2024 Updated: Never