Truth in the Age of AI: Why Deepfakes and Sora Should Worry Us
In a distant past (almost 10 years ago—ouch), I was taking entrance exams for business school. Back then, I could still focus for four hours straight—for a single exam. One of the main subjects was philosophy, and that year’s theme was truth. The essay topic we had for the HEC exam was: “Twilight of Truth.” I loved that phrase.
Thinking back on it recently, I was struck by how much that phrase resonates with our current reality. When I saw the viral image of the Pope a few months ago, I believed it was real.

Pope Francis about to do a rap freestyle
Funny as it was, that image raises a lot of important questions.
Deepfakes and the Arrival of Sora
On February 15, 2024, OpenAI (the company behind ChatGPT) announced the upcoming release of Sora. A historic moment: OpenAI's first generative AI that goes from text to video. In other words, it can create a video from a written prompt. After text (via ChatGPT) and image (via Midjourney), this is another massive leap forward.
Here’s an example shared by the company.
One thing’s clear: AI is advancing at a dizzying pace. Which isn’t surprising, given the massive investments behind it. OpenAI has already raised $11.3 billion—and is counting on even more very soon.
A fierce battle is underway with Google and other tech giants, so we can only imagine the escalation to come. Now, you might marvel at this—or worry. Today, I’d like to focus on the worrying part, from a specific angle.
With Sora, we’re entering a new era: ultra-realistic video creation is about to become commonplace. We’re talking weeks, not years. The immediate consequence of this widespread access? A dramatic leap in both the quality and scale of deepfakes, which have already been around for a few years.
A deepfake is an AI-generated video that overlays existing images and audio to make someone appear to say or do something they never actually did.

Take deepfake-powered phishing scams, which Cyber Preventys has already flagged. From what I've read, it's still tricky to produce a truly convincing deepfake video in just a few minutes. But we're likely very close.
The risks are many: manipulation, disinformation, defamation—just to name a few. Superstar Taylor Swift, for instance, was the target of pornographic deepfakes in early 2024. Disturbingly realistic images and videos circulated on X (formerly Twitter), with one of them racking up 47 million views. That highly publicized case is just the tip of the iceberg. Countless similar misuses are already happening.
Here's an even more staggering example: thinking he was on a video call with his CFO and colleagues, an employee at a Hong Kong-based multinational transferred $25 million to scammers. This tech is inevitably going to accelerate the spread of fake information—at an unprecedented level of realism. Telling fact from fiction will get harder and harder.
Until now, manipulation took effort—time, skills, tools like Photoshop. That’s about to become instant and accessible to all. And for now, fact-checking tools aren’t keeping pace… Our relationship with truth was already changing before generative AI—but AI is hitting the gas pedal. By the way, Sora means “sky” in Japanese. With Sora and what follows, could truth be disappearing over the horizon?
A Relationship to Information Transformed by the Digital World
Long before generative AI exploded onto the scene, digital technology had already changed the way we relate to information, knowledge, and truth. We're drowning in an ocean of content, feeling ever more lost in the storm.

In a world overloaded with information, grabbing attention has logically become the new gold rush for tech giants. As our available attention span shrinks, everyone wants to grab it—at any cost (pun intended 💸).
And what works best? Sensationalism, controversy—in short, anything that triggers instant emotion and speaks to our primal instincts. This frantic chase for attention has turned those tactics into communication norms. The result? Truth is often sacrificed on the altar of virality, urgency, and emotional impact.

Information is now consumed in rapid-fire bursts, at the expense of depth and context. We tend to settle for surface-level knowledge, without digging into the full story. According to a 2022 Ipsos study, 36% of 16–30-year-olds say they get their news from TikTok at least once a day.

For almost every major topic, it’s now incredibly hard to make sense of things—let alone separate true from false. This information overload creates profound uncertainty. It fosters an environment where discernment—our ability to evaluate and fully understand—is rare.
Overwhelmed, we lose sight of what’s meaningful vs. trivial. The way we consume information now discourages deeper exploration or critical thinking. We stick to the surface.
And now, with generative AI, anyone can become a content factory, cranking out fake news in seconds.
Post-Truth and the Wisdom Gap

The July 8, 2023 cover of Der Spiegel reads: “The End of Truth.”
Here's the basic definition of post-truth:
“Circumstances in which objective facts have less influence on public opinion than appeals to emotion and personal belief.”
Fundamental questions arise about how we form our understanding of the world in the age of digital dominance. In a post-truth world, doubt and relativism reign. It feels like this phenomenon is resurfacing—this growing inability to agree on shared truths. In this era of widespread doubt, individuals increasingly shape reality through the lens of emotion.
“Cognitive bubbles” form: algorithms serve us a worldview tailored to our beliefs. The “facts” we see only reinforce what we already believe. The real danger, as Emmanuel Brochier puts it:
“Post-truth is when truth becomes a relic of the past. The tipping point isn’t when truth is falsified, but when it becomes irrelevant.”
So the issue may no longer be just about recognizing the truth—but about our willingness to seek it out. We stop pursuing truth, and instead focus on expressing something, proclaiming something, to the world. One telling statistic, from a feature on the Instagramification of travel: 40.1% of 18–33-year-olds choose a vacation destination for its 'Instagrammability'.

If we still want to get closer to the truth, critical thinking and intellect are essential allies. Unfortunately, those are precisely the things being eroded by the attention crisis. The digital age, with its promise of unlimited access to knowledge, was supposed to create a more informed, rational society. Instead, it undermines the very skills we need to analyze and contextualize.

The Wisdom Gap is a concept I really like. It refers to the growing divide between:
Ever more complex, interconnected problems
And our decreasing ability to solve them
Technology isn't the only reason for this gap, but its breakneck acceleration is making it worse. Even when tech is designed with good intentions, it amplifies uncertainty. And wisdom sits at the top of the DIKW pyramid: Data → Information → Knowledge → Wisdom.

Today, information is abundant, knowledge is rare—and wisdom is the exception. Wisdom is the ability to make sound judgments, with ethical, moral, and societal awareness. A crucial quality in facing this century’s crises—yet digital culture often pulls us further from it. Many argue that AI, like all technology, is neutral—that everything depends on how we use it.
But as Victor Fersing recently put it:
“Technological innovation encodes practices and values into the societies that adopt it. When a new technology is adopted, it interacts with human society—a complex system. So the effects of technology are deeply uncertain.”
I agree. That’s why I’m not a techno-utopian. It’s hard for me to see all the opportunity without also considering the potential costs to humanity. Technical progress ≠ human progress.
🌪 The Eye of the Storm
Nearly 6 in 10 French people say they feel “overwhelmed by information” to the point that it prevents them from gaining perspective. It’s hard not to feel disoriented in this chaos. I feel it too.
The idea of twilight can be viewed as an inevitable decline—and we could just resign ourselves to it. Or, we can see it as the moment when you can look at the sun without burning your eyes. Your worldview is shaped by the information you absorb—and the attention you give it. So here’s the deal: limit the quantity, choose quality, and go deep. Feed your mind without drowning it.
I used to be on Twitter daily. Now, I spend far less time there. To stay informed, I avoid most social platforms, and focus on long-form, thoughtful, structured sources:
Books
Podcasts
Documentaries / YouTube
Newsletters
I try not to fall into the trap of immediacy. Yes, I miss some updates—but I’ve realized that’s okay. I like to talk about the eye of the storm: the calm center of chaos, where you can observe the whirlwind without being swept away. It’s a shift in posture—a step aside.

Jenny Odell, in How to Do Nothing, describes this shift as moving from FOMO (fear of missing out) to NOMO (necessity of missing out). Rather than a loss, NOMO can bring clarity—and cognitive sanity—in today’s world. Twilight can also be a moment of peace, where we quietly contemplate the light.
How to Switch to NOMO or JOMO Mode?
If you're looking for help—and a little push—to disconnect, we recommend the Jomo app (JOMO stands for the joy of missing out). It's available for free on iPhone, iPad, and Mac.
With Jomo, the goal isn’t to ban phone use altogether, but to use it more intentionally and mindfully. To stay in control—and not become a modern slave to that little device in your pocket. One of our favorite features in Jomo is a rule called “Conscious Use”, and it works incredibly well.
The idea is simple: by default, your distracting apps are blocked. To use them, you’ll need to ask Jomo for permission. You’ll be prompted to explain why you want to use the app—and for how long. It’s a powerful way to create distance without frustration, and to regulate your screen time without going cold turkey.
Here’s how to get started:
Go to the “Rules” tab
Scroll down to the “Templates” section and tap “Conscious Use”
Add the apps you want to block, and you’re all set!


Credits
This article is a revised version of Edition #26 of the Screenbreak newsletter created by Julien Rousset. With his permission, we're sharing this high-quality content with you today! So many thanks to Julien. 😌
Photographs by Unsplash, DALL·E, ScreenBreak, and the Internet.
[1] Lemoigne - La post-vérité, ou comment l’émotion prévaut sur les faits, Le Figaro, 2023.
[2] Benson - Humans Aren’t Mentally Ready for an AI-Saturated ‘Post-Truth World’, Wired, 2023.
[3] Der Spiegel - Une du jour. L’intelligence artificielle signe “la fin de la vérité”, Courrier International, 2023.
[4] WGSN Insider - Vrai ou faux ? L'IA et la montée du monde de la post-vérité, WGSN, 2023.
[5] Gomede - The DIKW Pyramid: Navigating the Information Landscape, Medium, 2023.
[6] Bertolucci - "Deep fakes", détournements massifs, infox en roue libre… pourquoi Sora d'OpenAI va faire buguer le réel, Marianne, 2024.
[7] Deep Fake, Wikipedia.
[8] The Wisdom Gap, Center for Humane Technology.