Incrementally Better #3
On our place in the universe and our inability to predict the AI future
A pale blue dot
I am convinced we would have different politics if our politicians regularly visited a planetarium. Just to be reminded of how tiny our planet is compared to the enormity of our galaxy … and how small and silly our conflicts seem.

How can you look at the famous pale blue dot photo above and think: Yes, our planet is possibly the only place with intelligence in this vast universe, so what it needs is more grievance-based division and less global governance?
P.S. Carl Sagan’s beautiful reflection on the photo is well worth your time:
Uplifting content
A bit of a palate cleanser before we jump into the main part:
Uplifting news: It was heartening to see 7 million people protesting peacefully during the ‘No Kings’ protests. We have become a bit complacent and take our freedoms and democracy for granted in the West, but the last ten years have shown that we need to fight for them.
Sound bite: Groove Armada - Paper Romance (Purple Disco Machine Remix)
Eye candy: Christopher Fisher’s beautiful dune photo (Bluesky):
The future of AI is full of surprises - because we are bad at predicting it
When will the ‘AI bubble’ burst? A question you will inevitably encounter on your favourite tech podcasts, in your LinkedIn feed or on Bluesky - which you should all join, by the way1. There is a good reason for this question. The investments in (generative) AI and the current valuations of companies at the forefront of AI are mind-boggling. But the valuations are of course not just based on (the revenue of) today’s incremental products, such as Copilot, but rather on the promise of the much more capable and autonomous AI the labs are working on (‘superintelligence’ or ‘AGI’).
There are plenty of sceptics arguing that we will soon reach a plateau as we run out of training data and the required computing power becomes too expensive. However, so far, every time it looked like progress might be stalling, the AI labs have pulled another rabbit out of the hat (e.g., chain-of-thought reasoning, multimodal learning, distillation). But the bigger question is whether this incremental progress will get us closer to AGI. Experts disagree, given the fundamental limitations of the LLM architecture (e.g., hallucinations, generalisation beyond training data, ‘learning on the job’). So how are we humble laypeople supposed to know?
Maybe we can’t, nor can the experts. The third episode of the (highly recommended) ‘Last Invention’ podcast is a good reminder that even experts have a terrible track record of predicting leaps in AI capabilities:
Deep Blue beating Kasparov in 1997: We were surprised.
AlphaGo defeating Lee Sedol with the famous move 37 in 2016: We were surprised.
ChatGPT’s capabilities when it was publicly released in November 2022: We were surprised.
Time and again, we underestimated the capabilities of AI technology. So who knows, there may be another big leap, another surprise just around the corner - particularly given the gazillions poured into AI research.
What does this unpredictability mean for AI governance? First, technical AI literacy is key. This sounds obvious, but keeping up to date with the ever-changing technology - and with legal and regulatory developments - is a tough gig. Second, our laws and risk management need to be nimble. In two years, LLMs may be outdated technology and a whole new architecture may take off. That’s why I am disappointed with the EU AI Act: It’s neither flexible nor risk-based enough (see Incrementally Better #1 for my critical assessment). In contrast, the NIST AI Risk Management Framework2 and ISO 42001 are flexible and technology-agnostic. Leveraging these frameworks is a great way for organisations to manage new risks arising from different AI architectures … and be prepared for any surprises.
Recommended reading and listening
Reading: Freevacy Newsletter
Other training providers are available etc. pp., but I really like Freevacy’s weekly newsletter. They consistently deliver a great mix of topics - including those that don’t get much airtime elsewhere (but should).
You can find the feed and subscribe here: https://www.freevacy.com/news
Reading: All the AI topics in one place
Eli Pariser’s report from the AI conference ‘The Curve’ covers many of the ongoing discussions in the field of AI, including the speed of technical progress, job displacement, the path to AGI and AI politics.
Reading: A book for language-lovers
Those who follow me on Bluesky will know that I love learning (about) languages. Despite having lived in the UK for more than 17 years, my English is still not on the same level as my German. But that’s also an opportunity, since there is still a lot to learn about the many wonderful particularities of the English language.
If you share this passion for languages, then I have a great book for you: Joshua Blackburn’s ‘The Language Lover’s Lexipedia’. From double negatives (a lawyer’s favourite?) and an explanation of ‘word rage’ to hacker jargon, the book explores lots of linguistic curiosities. It’s the perfect book to put on your coffee table and occasionally open at a random page to educate and entertain yourself.
https://www.bloomsbury.com/uk/languagelovers-lexipedia-9781526689368/
Reading: The fragmented EU AI Act supervision
This interesting study documents how EU countries have been designating their EU AI Act authorities. The study shows (including through illuminating charts) the fragmented approach to market surveillance authorities - with up to 13 different authorities in a single country. Another focus is the role of data protection authorities in EU AI Act supervision (Germany excludes its DPAs, while France gives the CNIL a prominent role).
https://ai-regulation.com/designation-of-national-competent-authorities-under-the-eu-ai-act/
Reading: No GDPR reform until we have better data?
Two heavyweights have recently weighed in on GDPR simplification: an OUP guest editorial by Helen Dixon and an FPF article by Christopher Kuner.
Helen Dixon’s editorial on the effectiveness and efficiency of the GDPR is an interesting piece … even though I am not entirely sure about the key message. Is it that we should not tinker with the GDPR before we have more data on its effectiveness? I am fully on board with Dixon’s call for more scientific evaluation of the GDPR (same for every law!). But I don’t think we should let the lack of data get in the way of GDPR enhancements.
It would have been interesting to explore other legal areas that have conducted successful scientific evaluations and to assess what data protection could learn and apply from that research (the suggested control groups seem an unachievably high standard?). It’s also a bit surprising that Dixon - with all her experience - doesn’t mention the funding issue (which I raised in the last Incrementally Better). But her main point stands: More research and data on GDPR effectiveness are needed3.
https://academic.oup.com/idpl/advance-article/doi/10.1093/idpl/ipaf019/8285716
On to Christopher Kuner’s article. I am generally a big fan of his work, but I was disappointed that this article devoted so many paragraphs to chastising Draghi for focusing too heavily on the economic aspects - surely, that’s Draghi’s role?
I am also not fully on the same page regarding the approach to GDPR reform. Yes, any changes should be transparent and should not lower the level of protection. And I like the idea of evidence-based reform, but, as with Helen Dixon’s editorial, the lack of detailed data shouldn’t become an excuse to miss the opportunity to enhance the GDPR.
https://fpf.org/blog/the-draghi-dilemma-the-right-and-the-wrong-way-to-undertake-gdpr-reform/
Listening: Children learn through play, and adults play through art
Let’s end with somewhat lighter fare. I hugely enjoyed this fascinating Ezra Klein Show interview with Brian Eno on our relationship with art and generative AI, why feelings are underrated and much more. A wonderful conversation.
1. I have created starter packs that make it easy for you to find folks to follow and get started. Join a small but determined group of privacy, security and AI folks trying to recreate the spirit of peak Twitter.
2. A word of caution: The framework is not as actively updated anymore under the Trump administration. See the warning on the website.
3. But please, please, with more scientific rigour than the CNIL “study” on GDPR security (https://www.cnil.fr/en/cybersecurity-economic-benefits-gdpr). Try feeding it into your deep research generative AI tool of choice and ask for an objective assessment of the scientific methods if you don’t believe me.


