#0005 The Great Chiasmus, AI tools may soon manipulate people's online decision-making, The Unbelievable Scale of AI’s Pirated-Books Problem, SLOW AI



Welcome to Constant Flux, a weekly lens taking a systemic view on the polycrisis.

Something is happening. As we optimize our bodies like startups and outsource more of our thinking to machines, the boundary between human and artificial starts to blur.

Today’s digest is a collection of signals from the frontlines of AI’s influence: from Marie Dollé’s poetic warning about our disinhabited selves, to real-time AI systems nudging our choices, to Big Tech looting libraries to feed their models.

This is us entering the Age of Aquarius, a shift into a new era where old paradigms fade, collective consciousness expands, and technology forces us to rethink what it means to be human in real time. It's only going to get weirder.

The Great Chiasmus

This century does not dehumanize us. It disinhabits us.⁠
The body is no longer a temple; it’s a startup chasing optimization, hacking itself for efficiency.

In this quite poetic piece, Marie Dollé observes a significant shift: as we humans immerse ourselves in digital spaces, screens, and optimisation algorithms, distancing ourselves from our physical forms, machines like Atlas, Figure-01, Clone Alpha, and Protoclone are increasingly adopting human-like physical attributes.

She warns that this transformation is not a natural evolution but a deliberate (and I'd add opportunistic) move by big tech, leading to the loss of our human essence.

It's time to critically examine the forces behind this change and its implications for our future. Start by gazing upwards, not downwards.

AI tools may soon manipulate people’s online decision-making, say researchers

Fancy living in a near-future where AI doesn’t just track your behavior but also learns your intentions and subtly nudges them through hyper-personalized interactions?

You're in luck! It's actively being built. At this very moment.

If platforms start selling real-time access to your changing thoughts, your autonomy turns into currency and someone else gets paid. Influence, optimized and invisible.

Don’t worry. You won’t notice a thing...

The Unbelievable Scale of AI’s Pirated-Books Problem

Meta using LibGen to train Llama 3 just goes to show how the big AI players (is that a thing?) are now in a full-blown arms race: grabbing as much data as they can before they have to resort to synthetic data, asking forgiveness later rather than asking permission.

This is just plain theft. The work of a usurper, built on the unpaid labor of writers. Meta can have all the ethical checks and legal frameworks it wants, but none of it matters if “MZ” gives the green light.

This violent behavior reminds me of the machine that "sucked" words out of books in Vernor Vinge's Rainbows End. (Incredible book, btw.)

SLOW AI

Slow AI is a concept that plays with the intentional and deliberative use of AI in design, asking some pretty good questions about where it's all going.

Though honestly, I'm mostly here for the delightfully playful way the material is put together. Makes for a nice contrast to the university Ethics in AI course I attended last semester. For more delightful websites like SLOW AI I recommend https://diagram.website/

Ripples


Docs - a French/German governmental alternative to Notion
France making its own version of Notion is a tiny signal of how things might change, at least with this kind of software. Trust in big US tech is fading, and Europe wants more control.

The AI Scientist Generates its First Peer-Reviewed Scientific Publication
This publication, authored by Sakana’s AI, suggests a future where artificial agents contribute to knowledge production.

Navigating the AGI discourse in foresight
Copenhagen Institute for Futures Studies wants foresight practitioners to stop getting lost in AGI fantasyland and instead stick to clear thinking that helps us plan and act.