Published: 2026-01-11

I often suffer from world-weariness. That aching, inescapable feeling that the world is somehow going wrong and there's nothing that I can personally do about it, and the general sense of emotional and cognitive fatigue that comes with those thoughts. I worry a lot about the world -- watching systems that I once trusted falter and decay into corrupt and fraudulent practice for some individualistic or dogmatic reason -- and the worst part is that you, dear reader, probably can't work out which systems I'm even referring to, because so many are simultaneously showing us their true colours.

And so, I declare that I am officially crashing out. My optimism is mostly shattered, my faith in humanity reduced to ashes, and my hope for the future mostly destroyed. The train is already off the rails and it's only a matter of time before we hit something.

None of this is to say that I plan to stop trying to make things better. If anything, I feel obligated to reduce the damage. It is simply that I no longer feel that we can avoid the worst of it.

This isn't going to be a happy or even a particularly productive rant, but writing down my thoughts is cathartic. Perhaps reading them will be for you, too.

Part 1: Fraud

I don't pretend to be an economist, or a philosopher, or in any way qualified to prescribe or describe the benefits of economic systems. I am a computer scientist, and my understanding of that science is intuitive; I am not as well-read or as deeply educated as many of my peers expect or believe me to be. And as a computer scientist who has participated in our academic institutions and industries, I understand that our field is built primarily on one thing nowadays: fraud.

Computer systems exist to make our lives easier. The collection and aggregation of data renders information readily inspectable, making informed decisions easier and automation possible. Computers themselves and informatics are not inherently fraudulent, but very little built thereon or thereabout isn't.

How is it that a company with little to no evidence of competence or quality is worth tens or hundreds of millions? How is it that it is generally understood that computer security vendors are mostly fraudulent? How is it that companies' stock prices consistently go up after they induce major disruptions of global infrastructure, costing billions? How is it that companies that get breached through gross negligence and willful ignorance, lie to customers about their exposure, and meaningfully destroy lives through their callous greed always get away with it, losing nothing?

The answer, sadly, is that this is how it has worked for many companies for a long time. Several times over, the global economy has been disrupted by confidence in and reliance on fraudulent systems. Despite that, we knowingly continue to build a world on fragility, fraud, and valuelessness.

Fraud in my backyard

The worst part is that the systems with which I had hoped to escape this rat race have proven to be only slightly less fraudulent. In my field, computer testing and security, researchers whom I deeply respect and personally care about openly, even giddily, discuss misrepresenting their work, both in publications and in grant proposals. Why? Because they believe that they can't get published or funded otherwise.

There is a widespread belief that it is no longer possible to be published while maintaining openness, admitting weakness, and discussing where our understanding ends. But it is the very belief, and the actions people take under it, that make it so! Reviewers expect perfection in the pursuit of "prestige", and that very expectation undermines the quality and rigour of the works they publish. Authors, facing seemingly insurmountable odds, cheat: they hide or outright lie about their research process, ignore and omit weaknesses, and intentionally avoid questioning their own work so that they can claim their efforts are complete.

Fraud in my neighbour's backyard

It is absolutely within my will, and maybe even eventually within my power, to effect change to the status quo of academic testing and security research. But the problems there are peanuts compared to what's happening in the recent boom of "artificial intelligence".

It's no secret that artificial intelligence is a powerful field. Recent advancements have their applications, and have proven themselves in mathematical, pharmaceutical, and pathogen research. But these applications are not necessarily a direct consequence of what people now mean when we say "artificial intelligence". That moniker now belongs to the chatbots and image engines which are just convincing enough to inspire trillions of dollars of investment.

And for what? What do these textual or image generation tools give us? And what will it cost us?

It's impressive that programmers can now "write more code" "more quickly". Whether that's actually desirable, or even true at all, is hotly debated. Frankly, in my eyes, the problem is not that we do not have enough code; the problem is that we've already written so much with so little regard for its quality or purpose. As we become ever more able to produce code quickly and carelessly, we will rely less on solid, community-developed solutions, turning instead to easy, barely "working" ones.

Companies and people who use these tools do not care about efficiency, rigour, correctness, or safety; they care about "making it work just well enough to satisfy the stakeholder" (often themselves). And that might be fine sometimes, sure, but when our whole economy is built on fraud, how long can we last? We already have technical debt and bitrotting codebases; what happens when nobody understands that code and it breaks? Rather than a system which acknowledges and remediates its rotten core just enough to get by as things break and deteriorate, we will build systems that nobody understands, that lack assurances, and that are utterly incorrigible. That's not even addressing the harms this technology brings to interpersonal interactions, or how it enables us to further dehumanise each other as we use it to summarise, plagiarise, and hallucinate the information we share with each other.

And that's just addressing the "good" uses of this technology. What is currently happening on X, and the otherwise seemingly endless fount of abuses associated with what these tools (completely unavoidably) produce, is despicable beyond what can be put into words. I hold every individual who was aware of these consequences and chose to release these tools anyway in the highest contempt, and regard every abuser of these systems with utter disgust. The collective harm of these tools has already outweighed any positive we could hope to extract, and this trend will only continue.

This entire subindustry, which now commands the discipline I love, is truly rotten: built upon theft, sold fraudulently, and capable only of worsening the situations in which we find ourselves. It is built on the efforts of a system that is already fraudulent and corner-cutting, and will only exacerbate those problems further. The people making these tools, knowing that they misrepresent the quality of what they produce in a litany of ways, happily sell the means of our destruction, ravage the physical resources of the world and the computational resources of others, and giddily steal everything that isn't locked down in ways they haven't yet circumvented.

And the governments of the world stand back and watch, understanding that anything they do to address it will cost them politically. Our field gleefully embraces the technology even as it actively undermines all that is good within it. Academia utilises it despite the innumerable ethical, scientific, and safety issues that come with it.

Cowards.

Part 2: War

I'm American by birth, with military tradition in my upbringing. I actively seek to avoid military involvement of any kind. Non-diplomatic, non-collaborative action is always destructive; there is always loss where there could be collective gain.

Yet, I cannot deny its necessity; a world with no good people willing to fight is a world dominated by evil. I don't have the answers for where those thresholds lie. What I do know is this: there are powers in this world which actively seek the destruction of free thought and identity. These powers are growing, and institutions defending those freedoms are weakening or turning.

Politicians and pundits undermine personal freedoms in the name of safety, then look the other way when that safety is undermined by politically "difficult" characters. Leaders undermine the sovereignty of allies in the name of national security, weakening the collective against true enemies. Good people fight amongst themselves in an attempt to figure out the best course of action, allowing those with ill intent to sabotage and weaken their efforts.

I don't see any advantage in being precise about what I fear. Those of you who know me know that I am deeply opposed to enabling government, military, or even activist use of offensive security technology; such actors often prove themselves irresponsible in their utilisation of these technologies. Everything I have worked on has been for the improvement, not the undermining, of the safety and security of the general public. Yet, seeing what lies on the horizon, I am forced to ask myself whether I would consider changing my stance.

I hate the idea of living in a world where the right thing to do violates such a core principle. I especially hate that, despite everything, the greed, corruption, and selfishness in both my own country and others might force the world into a state where the only good choice is fighting, not collaborating.

War in my backyard

But we aren't there yet. And in academia, we must refuse competition where there could be collaboration. It is still possible to cut out the bad actors and engage in true scientific inquiry, rather than the chase for prestige in which we find ourselves.

Groups like ACM PROTECT must be more firm, and actually root out and exclude actors known to engage in scientific malpractice. Without real consequences, nothing will change.

And maybe ACM (the company, not the individuals who volunteer their time to support it) isn't the organisation capable of this task; a for-profit, afraid of lawsuits, cannot realistically be the be-all and end-all. We must stigmatise and call out malpractice in our community. We must be willing to fight for quality, for rigour, and for truth and understanding. Otherwise, nothing will change, and our field is doomed to forever question the legitimacy of every work, rather than questioning for the sake of understanding and scientific curiosity.

Part 3: What remains

I worry for our world, our climate, our freedoms, and the next generations. On a smaller scale, I worry for my immediate academic and social communities. I am too worried to say everything I mean and, in many ways, that makes me a coward and a hypocrite. I worry that I'm too late to change anything, both in my communities and in the world.

There is an unbelievable amount of human suffering -- some within our reach to fix, some not -- caused by greed, neglect, and casual cruelty. There is nowhere to run that will avoid it or its subsequent effects. There is a crisis in my communities which challenges the foundation of what they stand for. There is a market failure in our future which will devastate our world. There will be wars, soon, that will test, both individually and collectively, people who never expected to see anything but peace.

I don't know what to do. But I know we'll do our best.