30,000 fired by email while AI eats the budget
S1 E34

Just a little bit?

Okay.

Hi everyone.

Welcome to the Bright Signal Podcast where we cut through the noise and bring you the
latest tech news and interviews.

My name is Murilo.

I'm joined by my friend Bart.

Hey Bart, and Raphael.

Hey Raphael.

How are we doing?

Good, good, good, good, good.

I'll get to you, Raphael, in a second; I think it's maybe the first time people are hearing about you. But quickly, before that...

Also we have a new intro, right?

Some new changes; I think Bart and I alluded to it in the previous episodes.

And a new name, exactly.

Yeah, indeed, indeed.

Maybe I'll let Raphael introduce himself first and then we can talk about like what can
people expect going forward.

But maybe Raphael, who are you?

So yeah, thank you, Murilo, for the introduction, and also Bart.

I'm really happy to be here for the first time.

And for those of you who don't know me, which means 100 % of the audience, except my mom,
if she's listening, I'm working in finance mainly, but also teaching economics at a university here in Belgium.

and also passionate about tech, technology as a whole.

I'm not as techie as Bart and Murilo, as you will see in the discussions.

But I like to keep myself updated on mostly the entrepreneurial ecosystem, but also tech
in economics, tech in finance, or finance in tech.

It depends on which way you are seeing it.

And also I had to find an excuse to be on the podcast.

So yeah, that's me, and I'm really happy to go bananas about tech news with you guys.

Welcome, Raphael.

Thanks.

Indeed, indeed, indeed.

Happy to have you.

And I think this is a nice addition.

I think you mentioned maybe you're not as familiar with all the techie things, but on the other side, you're definitely way more familiar than I am with all the financial news and all these different things.

Exactly, exactly the money.

So now we're going to be rich Bart.

We're going to be rich.

Oh, we won't speak only about the money, but yeah, that's the goal.

For sure, for sure.

But then now we have also a new name, Bright Signal Podcast.

What is the Bright Signal Podcast?

Maybe I'll ask you now Bart.

That's a bit unprepared, as you ask me now.

I should have prepared this a bit better.

We've been thinking a bit about how to evolve going forward from the Monkey Patching podcast, right?

I think we started the Monkey Patching podcast maybe very techie, and then slowly evolved a bit more to global data and AI news.

It became a bit of a combination of the global news and small tech updates.

I think, speaking a bit for you as well here, Murilo, I think we enjoyed what we were doing.

I think we also had some humble successes in terms of subscriber count and stuff.

But we also saw that it's a very crowded space, right?

Like a lot of people are doing global tech and AI news because just simply because it's so
hot these days.

We started thinking about how we can differentiate ourselves.

What Bright Signal is, is a bit of a...

We try to differentiate more by having a more complementary co-host team.

Where Raphael brings a lot of background from the more financial and commercial side of
things.

But also we have a slightly different focus, where we don't only do news. We will still do news roughly one or two times a month, the exact schedule still to be settled, where we have very consciously split the news into global news and EU news.

I think that is a big thing.

So we'll cover the global news, EU news, but also like still small tech innovations.

And during all the other sessions, we try to focus on interviews with tech startups from Europe, or investors in the broad sense of the word, whether it's a VC or an angel, to also get a bit their point of view on investing in the European ecosystem.

I think that kind of summarizes it.

Anything to add to that, Murilo?

Yeah, no, I'll just echo what you said, but also, I think it also comes from this: we're interested in people that are passionate, that have nice ideas as well, and we also focus a bit on the EU, right?

I think we're all in Belgium, and we also noticed that a lot of this news is from the US.

Yeah, but at the same time there is a lot of stuff that happens in the EU and we also
wanted to shed a light there, right?

So I think that's also a bit the goal here.

So I'm actually very excited today.

It will be the news.

I think listeners are going to notice a slight change in format, but I think largely the content is the same.

I think, like you said, it's some things that we were already discussing.

I'm happy to have Raphael now, because there's a lot of stuff about the big numbers we saw from OpenAI and Anthropic, all these different players, you know, that I was always a bit like: what does this mean? I kind of get it, but... So I'm also happy to have Raphael joining us. And we're already planning some startups as well, so very much looking forward to what's coming. But without further ado, we should just kick it off. Who would like to start?

Chinese AI models from labs like DeepSeek and Minimax have overtaken US rivals in token consumption on OpenRouter since February.

Driven by prices as low as 2-3 dollars per million output tokens, compared to roughly 15 dollars for Anthropic's Claude Sonnet 4.5.

The cost advantage stems from cheaper energy and more efficient model architectures, and it's reshaping developer behavior.

One Hong Kong developer now routes 80 % of his work through Moonshot's Kimi model to avoid spending 900 dollars a day on Claude alone.

Alibaba has moved to capitalize, creating a new Alibaba Token Hub business group led by
CEO Eddie Wu.

betting that token economics will define the next phase of AI competition.

China on the rise.

Yeah, maybe we should explain what's an AI token.

Did you know that before the article?

We use it a lot, so I think, yeah, we consume a lot of AI tokens. But yeah, maybe, if you'd like to explain: what is an AI token?

So personally, as I'm not a developer, I don't really use AI.

I'm just like a subscriber of AI models, et cetera.

So yeah, so I guess the price of AI is just the price of AI tokens, or the price of AI usage.

So basically it's a fuel.

It's the fuel for AI.

So each time you just type on ChatGPT or Claude, "Hello, how are you?", it just costs AI tokens.

Yeah, also the way it works a bit behind the scenes, right?

You have text like this, and the text is broken down into, it's almost like syllables; in machine learning, if you're building these things, it's called tokenization.

So then, for example, "software" can maybe be split into "soft" and "ware".

This is just hypothetical.

I'm not saying this is how it is.

And each one of these pieces is converted to a number that is sent to the machine.

Each one of these pieces is a token.

So these are the AI tokens, and when you pay for them, you pay for each token that you use.

So if you have a very big text that you want to send to models and the text that sent
back, basically those are the in and out tokens, right?

They actually have different prices.

And if you have models that kind of talk to themselves, that think out loud, they also produce those tokens in between, which also have a different price.

But basically if you have a lot of tokens, you spend more money.

And if the tokens are expensive, then this scales as well.

And I think this is also what it's saying here.

Like, OpenAI and Anthropic, they have the best models today still, but the Chinese models, they're actually open source and they are way more token efficient, right? Well, I'm not sure if they're token efficient; they're cheaper per token, that's for sure, right? And I think...

They're not quite there.

I think, well, looking a bit at Bart here to see if you agree with me, I think they're not as good as the closed models, but they're still pretty good, right?

Maybe just to add to the token stuff.

So just as a very, very, very rough proxy, you can think a bit: if you type, four characters is roughly one token.

And typically models have a pricing on input tokens.

So how many characters do you send to the model, and how much comes out?

Those are output tokens.

And again, for output tokens: if it's one token, you get four characters out, roughly.

This is very rough.
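The rough arithmetic Bart describes (about four characters per token, with separate prices for input and output tokens) can be sketched in a few lines of Python. The prices below loosely follow the figures in the article and are purely illustrative, not current list prices.

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, completion: str,
                  usd_in_per_m: float, usd_out_per_m: float) -> float:
    """Dollar cost, with separate prices per million input/output tokens."""
    return (estimate_tokens(prompt) * usd_in_per_m
            + estimate_tokens(completion) * usd_out_per_m) / 1_000_000

# Illustrative prices per million tokens: a cheap open-weight model
# versus a frontier model at roughly $15 per million output tokens.
prompt = "Hello, how are you? " * 50_000      # a long input
completion = "Fine, thank you! " * 50_000     # a long output
print(f"cheap:    ${estimate_cost(prompt, completion, 0.5, 2.5):.4f}")
print(f"frontier: ${estimate_cost(prompt, completion, 3.0, 15.0):.4f}")
```

The point of the sketch is the scaling: the same conversation costs several times more on the expensive model, because every input and output token is billed.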

And what we see in these lists is that China puts out a lot of, well, you can call them open source, you can call them open weight models.

I think whether "open source" applies in this context is a bit of a debated topic.

But they are cheaper to use, I would say.

Open router, sorry?

Much cheaper compared to the prices if you compare Minimax to Anthropic; I think it's like, I don't know, what numbers did they give here?

Like two to three dollars versus 15.

So, yeah.

But the.

The question is a bit how much cheaper are they actually, right?

Like for a developer to use it, they are much cheaper.

If you use them through OpenRouter.

So OpenRouter is a bit of a system that gives you a central entry point, a bit of a proxy, where you can say, I want to talk to this model from Anthropic, or I want to talk to this model from DeepSeek, or to this model from Alibaba, or whatever.

And based on whatever model you talk to, you pay a cost, right?
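As a sketch of what that proxy looks like in practice: OpenRouter exposes an OpenAI-compatible chat endpoint, so switching providers is just a different model string in the same request. The model identifiers below are illustrative; check OpenRouter's model list before relying on them.

```python
import json
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(model: str, message: str) -> dict:
    """One request shape, many providers: only the model string changes."""
    return {
        "model": model,  # e.g. "deepseek/deepseek-chat" (illustrative)
        "messages": [{"role": "user", "content": message}],
    }

def chat(model: str, message: str, api_key: str) -> str:
    """Send a chat completion request through the OpenRouter proxy."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(model, message)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Same call, different provider, different price:
# chat("deepseek/deepseek-chat", "Hello, how are you?", api_key)
# chat("anthropic/claude-sonnet-4.5", "Hello, how are you?", api_key)
```

That one-line model swap is what makes the price comparison in the article so direct for developers.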

It's very expensive to talk directly to Anthropic's models, or to Gemini models, or to OpenAI models.

But I would argue that they also price them commercially.

There's also development costs on this, we need to take this into account.

While if you actually look at OpenRouter, where these other models are hosted, they are typically just hosted by server farms running these open weight models.

It's not DeepSeek themselves, often; it's not Alibaba themselves. These are server farms that are hosting open weight models, and they just need to make a margin on the compute, right? They don't need to take the whole development cost into account. So I think it's logical to some extent that today they are cheaper, right? I think performance-wise they are very comparable, to be honest.

So for the thing that we are developing, the startup I'm working on, we capture any type of information and try to structure it automatically for you.

While we're in the test phase, we're also using OpenRouter, so we can very easily switch to a different model.

The difference is really minimal on a lot of different cases.

From the moment it becomes very complex, you tend to end up in the state of the art
models, but even there, the gap with the Chinese models is becoming very small.

Yeah.

So you're also doing what's written in the news.

It's kind of the 80-20 rule.

So 80 % of the job is doable with Chinese AI applications.

And then the rest, the most complex part can also be done by more complex AI, which could
cost more.

Which today still costs more.

But the question is whether it will still cost more tomorrow.
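The 80/20 split Raphael describes is essentially a routing policy: default to the cheap model and escalate only the complex tail. A toy sketch, where both the model identifiers and the complexity heuristic are made up for illustration:

```python
# Hypothetical model identifiers, for illustration only.
CHEAP_MODEL = "moonshotai/kimi"
FRONTIER_MODEL = "anthropic/claude-sonnet"

def pick_model(prompt: str, needs_deep_reasoning: bool = False) -> str:
    """Route most traffic to the cheap model; escalate the hard cases."""
    if needs_deep_reasoning or len(prompt) > 8_000:
        return FRONTIER_MODEL
    return CHEAP_MODEL

print(pick_model("Summarize this paragraph."))                        # cheap tier
print(pick_model("Prove this theorem.", needs_deep_reasoning=True))   # escalated
```

In a real setup the escalation signal would come from task type or a failed first attempt, but the economics are the same: the frontier price is only paid for the fraction of requests that need it.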

I think the bigger thing here is also the geopolitical competition that is happening.

Because the only thing we see through OpenRouter is what it costs for these server farms to basically host an open model.

That's what we see with OpenRouter.

But we don't really see the cost that these Chinese companies have in training these models.

It's not very transparent.

There's a lot of discussion that it's also being sponsored by the government.

We don't really know.

But what you do see is that the US has a very strong competitive advantage on AI, and everybody thought a year ago that it was miles ahead of everybody else, but actually China is very, very close.

Right.

And China, by doing this, by serving all these things at a very low cost, is making people think twice, right?

From the moment it's like 10 times as cheap, it's not something that you can just ignore anymore, right?

If one costs a dollar and the other costs 95 cents, then you think, well, no, let's do the
dollar because we know the context, it's Americans, you can probably trust this.

But from the moment that the price difference has become that big, the choice becomes more instrumental, right?

Yeah, for sure.

Also,

And also like very recently, right?

We're not going to cover this today, but the source code for Claude Code also came out, right?

And I think people are looking a lot at how the actual application works.

And I saw in a lot of different dissections, let's say, that the model maybe wasn't the key differentiator.

I mean, they said the model of course is better, but they were also saying that there's a lot of scaffolding around these applications, right?

And I mean, you can actually use Minimax models today, which actually I think is the most popular, I wanna say.

Yeah, Minimax.

with Claude Code as well, right?

You can also have a subscription for coding with Minimax and I actually see a lot of
people that are opting for this instead of Anthropic.

Yeah, I'm not sure; I'm wondering if they're still gonna be that much ahead.

I mean, like you said, the gap is already closing a bit, but how long are they gonna stay ahead until people just say, okay, I'm just gonna go for the cheaper one, because it's good enough, quote unquote, right?

Yeah, I think that the US has a competitive advantage, as you said, Bart, on the models as such.

But what's important in this article, and it shows a bit what's going on on the AI race,
is that sometimes it just comes down to one thing, which is the cost.

And in this case, it's the energy costs.

I think that China has...

a huge competitive advantage on this part.

We see huge investments in China on electricity generation, like with renewable energy.

There is an interesting figure for the electricity generation of the US compared to China.

If you look at the numbers for electricity generation in terawatt-hours: in 2010, China and the US were at the same level, around 4,000 terawatt-hours.

And right now, China has doubled its electricity generation, which is also an investment in AI.


China has an efficient electricity grid, while in the West...

Yeah, the infrastructure is a bit aging, and I think energy is something that most people overlook, to be honest.

And recently, it is also interesting, Nvidia's CEO Jensen Huang compared the AI race to a five-layer cake.

At the top, there are the AI applications, then the LLMs, then the infrastructure.

And at the very bottom, there is just the energy.

And this is part of the competition as well.

So yeah, I think that the rise of China is also due to its efficiency in producing energy.

Yeah, and I think maybe even there: if you look at these five layers, you start with applications and you end up with energy.

But I would even argue that potentially at the application level, China also has some advantages. They are way ahead in terms of robotics and these types of applications, and integrating AI into home appliances goes very quickly.

And a lot of the whole supply chain to even build this into hardware only exists in China.

It's interesting to see how this moves forward.

Indeed. Maybe moving on?

Yes, maybe Figma.

Figma introduced write access for AI agents via its MCP server, allowing tools like Claude Code and Cursor to design directly on the Figma canvas.

It's a significant upgrade from the previous read-only integration.

The feature means AI agents can now generate components, variables, and full screens using
a team's existing design system and real Figma primitives.

It's a strategic bet that could shape Figma's role in product development as AI agents
increasingly become the starting point for prototyping.

rather than Figma itself.

So maybe to start, what is Figma?

And I think you have the most experience with Figma out of the three of us, Bart.

So I'll let you take this one.

Well, I'll explain from my own experience, it's probably not giving the full picture.

I would describe Figma as a mock-up and design tool.

Let's say, if you want to start building an application, you can very quickly mock up how the screen of the application could look, in Figma.

I think you can do way more with Figma, but to me that's a bit like the very logical use
case for Figma.

The challenge that they have in the current climate is a bit twofold.

But I'll maybe go into the MCP part first, because the article is a bit like: Figma allowed read access to everything that they have on the platform via MCP.

That means through your ChatGPT, through your Claude, through whatever AI tool, you can talk to it.

But you could only read from the platform.

A bit with the idea, and I think a lot of platforms hoped this, that their platform will still have a significant place in a new world, right?

In an AI-native world.

And I think what Figma now does, in allowing also write access, is basically realizing that users will maybe not use their platform, the front end of the platform, anymore in the future, and that they will just use an AI agent to do something on the platform.

Which is a big shift, right?

Like you have much less control over your user if it's just the agent of the user creating
something on your platform.
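To make the read-versus-write distinction concrete: at the protocol level, MCP tool invocations are JSON-RPC `tools/call` requests, and "write access" just means the server now exposes tools that mutate the canvas instead of only inspecting it. The tool names and arguments below are hypothetical, not Figma's actual MCP tool names.

```python
import json

def tool_call(call_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP JSON-RPC tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Read-only era: the agent could only inspect existing designs.
read_req = tool_call(1, "get_node", {"node_id": "12:34"})

# With write access, the agent creates on the canvas directly.
write_req = tool_call(2, "create_component", {
    "name": "PrimaryButton",
    "parent_id": "12:34",
    "properties": {"fill": "#1E88E5", "cornerRadius": 8},
})
```

Both requests look the same on the wire; the shift Bart describes is entirely in which tools the platform chooses to expose, and who (human or agent) ends up driving them.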

So I think that is interesting to think about if you're building a SaaS.

The visual entry point of that SaaS: how can you still expect your users to keep using it going forward, or will they just ask a question in ChatGPT, and ChatGPT will read or write something to the platform?

Yeah, I think a lot of the time the entry point is becoming more and more these chatbots, right?

Claude, Cursor, ChatGPT.

But I also think, well, maybe I'm looking a bit at you again, Bart, but I think Figma was really like: someone would say, this is the UI I want, and then you give it to a front-end developer, and then they'll go bananas and work and do the animations and all these different things, right?

But I feel like now, maybe because Figma wasn't playing ball, let's say, to make it easier to work with the agents, it's now easier for someone to actually just talk to Claude and say, build something like this in HTML or whatever, and it will do kind of the same mock-up.

Exactly, exactly.

So I think the argument you can make is that this is a Figma-specific case, in the sense that it has become very easy for everybody to just make a mock-up with either Claude Code or Lovable or whatever, and you don't even need Figma anymore.

And I think maybe them not allowing people to write, only read, made it even worse for them, right?

So maybe that's also a bit them saying, okay, we're not playing hardball anymore, because we know it's a losing battle, right?

Maybe.

The risk is that Figma is becoming just a supplier to Claude and ChatGPT, for example.

Yeah, it looks more like a plugin to Claude than an actual platform itself.

So they lose basically all their direct relationship with the final users.

Well, that's indeed what they're at risk of, right?

I think the argument that you can make, from a UX/UI design perspective, is that companies still want some unified approach to this, hopefully.

And that is the value that Figma brings: it allows you to always look at new mock-ups in the same way, and there is a team way of working on this, which you would all need to define yourself if you do this in Lovable, of course. But the arguments become much slimmer than they used to be. Because Figma used to be the king of mock-up design.

Yeah, exactly.

It also takes away, to some extent, pricing power from the company. I think for SaaS, valuations are built on pricing power, meaning they're built also on the stickiness of recurring revenue.

When you have a base of users, you just know that for those users to replace all the stuff and just go to another SaaS, it will be a bit complicated and costly.

Now you just need to ask ChatGPT or Claude to do that.

I think that's a very challenging thing for Figma.

Because on one side, not opening up full write access for their MCP potentially makes users think, yeah, why am I even paying for this? I can also do this in Lovable or whatever; maybe I don't need Figma. But at the same time, if they do open it up, and users start to just instrument Figma through an AI agent, then they're maybe going to think, yeah, I'm just using it as a generative layer. Is it even worth the $20 per user per month I'm paying?

So I think it's a very challenging situation that they're in.

It sounds a bit like a lose-lose.

Do you think Figma is the only case where a SaaS service can, not disappear, but become less useful?

Or does it mean that no SaaS could ever be made again?

No, no, I'm very much against the whole idea that SaaS is not relevant anymore.

I think SaaS is very relevant.

I think why Figma specifically is under pressure is because what GenAI today is extremely good at is the whole AI coding part.

And Figma is exactly on that path, like somewhere in between that path.

Like you start a project, you need Figma somewhere in between, and then you're actually
going to build the application.

They have the challenge that they're on a path where you actually don't really need them; they become a nice-to-have.

But I think a lot of SaaS platforms are crucial for compliance, crucial for having very clear, structured processes.

I think it's more of the specific field that Figma is in.

Yeah, I think I agree.

I think with AI, it changed a bit the software development lifecycle, let's say, right?

And I think the other example the article gives is Linear, which is like a ticketing or project management tool, right?

But because AI now changes the way we are developing software, I also think people are rethinking how we do these things, right?

The very traditional way: you had this, you handed it to someone else, and you had to get it right in the UI, because if you didn't, you'd have to make changes, and then it's gonna be a lot of back and forth and sending things over between teams.

That's not the case anymore, right?

So I think the traditional software development lifecycle tools may need to be adapted a bit, right?

They may need to reinvent themselves.

And I think there may be new tools as well that are more appropriate for the new coding
era, let's say.

But it's also hard, I think, to tell today what's good and what's not good because there's
so many apps.

I think AI coding also made it easy for people to build apps.

So I think there's a lot of noise, right?

So it's hard to find that bright signal.

Anyways, anything else you want to say here before I move on maybe?

Then what do we have next?

Raphael?

Oracle begins laying off up to 30,000 employees, roughly 18 % of its workforce, with
termination emails sent at 6 a.m.

local time across the U.S., India, Canada, and Mexico.

No prior warning from HR or direct managers.

The cuts are directly tied to Oracle's aggressive expansion into AI data center
infrastructure.

with the layoffs expected to free up $8 to $10 billion in cash flow to fund the build-out.

The move follows a pattern of massive tech layoffs this year, including 30,000 at Amazon and a 40 % headcount reduction at Block.

That's not a nice way to start your day, right?

A 6 a.m. email saying goodbye.

That's massive.

I think it's one of the biggest mass layoffs that's happened in the US workforce.

Okay, interesting.

This will never fly in the EU, I think.

Because they said it was without prior warning or anything.

They just said, your role is not useful anymore.

Because I think it was part of the argument is that it's a restructuring of the company,
right?

And then the roles are not required anymore.

So they just kind of say like, okay, we're done here.

Yeah, it's weird, huh?

I think also the only time I heard something like this was with Twitter, now X, right?

When Elon Musk started firing people.

It was just an email saying, it's like, okay, pack your things and you're done.

It's very, very rough, very strange.

And I think it was needed to free up cash to invest in AI infrastructure.

Is it the big AI replacement for you or is it just a way to free up cash and just to
invest in technology?

I don't think it's an AI replacement thing.

I think what they're doing is betting that their business will be AI compute going forward.

And that this comes in a context where they took on a huge amount of new debt.

I think it's almost 60 billion earlier in the year.

And apparently their free cash flow went negative.

So there were already rumors that there were going to be layoffs.

So these are the layoffs that are happening because of that.

And I think this is because they think that, apparently, the service departments will not bring the value that they hoped to bring over the coming years, and they will just put all their eggs in the AI compute basket.

It's rumored that this frees up eight to ten billion in annual cash flow.

Hence these 30,000 layoffs.

Yeah, I saw this too.

I mean, the article also says, indeed, 58 billion in debt in just two months.

It also says that they tried to go to banks, but many banks reportedly stepped back from financing this data center project.

So they're really trying to find money.

They probably already have debt.

And I mean, here it says a 95 % jump in net income, so apparently they do have money coming in.

They are taking loans and they are cutting people because they want to go massively into these data centers.

The thing for me is like, I feel like we've been hearing about this so much, right?

From Google, from Anthropic, from OpenAI.

We had, what's the name, that government-related project? Stargate, I want to say, right?

There was an OpenAI announcement a while ago with the administration.

I hear so much about this data center thing, or I'm not sure what.

But I don't think it was just OpenAI; OpenAI was a big player there as well.

And then like the funding got pulled.

Yeah.

And I remember it got pulled when the Chinese models came out and the performance disrupted things, and then, I don't know, but I feel like there's a lot of discussion on this.

I remember one time there was one article.

I don't remember if we covered it, but it was like Anthropic investing in Google or something like that.

It was just like one big player investing in another, one investment on top of another, and it really feels like it's a bubble that is going to burst.

Because you have all these investments being made, and it looks like there's a lot of money circulating through everyone, but it's really just within that group, right? And now Oracle is apparently making that switch.

I don't know how to feel about this, to be honest. I'm not sure if it's...

It's not even a distressed company, because they make good results, I think.

But I think they are just betting on AI.

They put their balance sheet at risk on AI, and this is a bet, as Amazon is doing right now with its investments in AI.

And I think that investors know that this is a bet when you look at the Oracle stock; it has been punished this last year.

I think it lost more than 50 % of its value from September or November until now.

I think that investors are not really convinced.

I agree with Raphael; it looks like they're so confident that AI will be the future that they're willing to cut 18 % of their workforce to fund it, basically.

And it's a bit like: instead of having the operational expense of all these people on payroll, we are going to put this amount of money into capital expenditure, and see it as an investment going forward, in hardware and compute we can buy with it.

And do you think it's confidence in AI, or is it fear that they're gonna be irrelevant because everyone else is moving to AI?

Do think it's a push or a pull?

I think it's a combination, right?

Like you're preparing yourself for the future.

And everybody is going into that direction.

But I think in this ecosystem, there are a lot of different bets.

When you see, even for Oracle: in September 2025 they just made an announcement of a deal with OpenAI.

OpenAI would purchase 30 billion dollars of computing power from Oracle.

And people were just: OK, does OpenAI actually have 30 billion dollars to spend on computing power from Oracle?

That's the question.

Yeah, maybe one last thing before moving on.

Like these are the teams that were hit the hardest.

I was just wondering, reading this, if I would actually say, okay, they're deprioritizing this and prioritizing that.

Maybe just to read from the article: Revenue Health Sciences, the SaaS and Virtual Operations Services, and NetSuite's India Development Center.

I don't know, do these numbers tell you a story, or?

Not really, because for me it was hard to see what this meant.

Well, probably it's a bit easier to cut these people, also with the promise that a lot of these service roles can potentially be automated.

So maybe it's also a bit of de-risking there.

But maybe it's also like Oracle is built on a lot of legacy services, right?

Like maybe they have seen a decline in the last years on these type of services.

I don't know, to be honest.

We'll see, we'll see.

I know that there is a lot of speculation currently on Reddit.

A lot of fired people just raising voice on the forum.

So I think we will know more in the coming days also.

We'll see, we'll pay attention to it.

What is next?

A Los Angeles jury found Meta and Google liable for intentionally building addictive
social media platforms that harmed a young woman's mental health.

Awarding her 6 million in damages.

The woman, known as Kaylee, testified she started using Instagram at age 9, YouTube at 6,
encountering no age verification and was later diagnosed with anxiety, depression and body

dysmorphia.

The verdict is expected to have implications for hundreds of similar cases now winding
through US courts.

It came just one day after a separate New Mexico jury also found Meta liable for...

Yeah, I'm happy to see this.

Bye.

I think social media, especially social media as we know it today, which is very much like
the algorithm, is very much made to make you addicted to it.

It's also, I think, very influential on kids.

I don't think it improves kids' self-image or their well-being.

And the challenge is that there was or is almost no regulation on that.

And I think what everybody is saying is that this Meta/YouTube moment is a bit like what we've seen with cigarettes, the Marlboro moment, where at least we have a formal precedent saying that this is indeed harmful, and it's basically opening up for regulation.

And I think that is something to be hopeful for.

That I agree.

I think also here, well, maybe the actual angle is not quite that.

It's that it's for children, right?

It's really specific to children.

Right.

And that's the main case here.

I don't think it would fly as much if it was someone that started using YouTube and TikTok and whatever at age 20.

Right.

But I do agree that these things are designed to be very addictive.

I remember years ago there was a whistleblower saying that there was research within Meta about Instagram and how it was linked to mental illnesses for young teenagers, basically.

Like even showing the addictiveness and the body image issues and all these different things, right?

So basically saying they kind of knew about it, but they didn't really act upon this.

So I do think it's good. I mean, I agree with you, Bart, that it's good that it's a landmark, right, a legal precedent to maybe regulate more.

What I don't necessarily fully agree with is that it's only on them. I mean, I think they carry a big part of the blame, but I also think, like you said, there maybe should be more legislation. Maybe there should be other things in place, right? I don't think it's realistic to expect just one party to take all the blame for everything. I don't think Meta and YouTube are the only ones to blame for what is happening here, right?

Like, I think what everybody's hoping here is that this will open up broader regulation on this topic.

that I fully agree.

That I fully agree.

I think maybe it was just my perception of the article, because it says, oh, they were fined, and this and that, and everyone was celebrating that it's a win. So maybe we don't really diverge on it.

I think both TikTok and Snapchat in this case, they settled before it went to court.

Yeah.

yeah, I see.

No, but that I agree.

That I agree as well.

yeah.

I'm very happy to see this happen.

I think this has very far-reaching implications for kids.

I think it's very addictive, it's very easy to get your dopamine hits from here by
scrolling on social media.

Because you get your dopamine hit there, it's also like maybe you don't need to go out as
much, maybe you don't need to meet as many people because that's difficult, right?

Maybe it's easier to just scroll on social media.

I think it has very far-reaching implications and I'm very happy to see this playing out.

at the same time, what is also fascinating in this article is that the product, which is
in this case Meta, so Instagram, worked exactly as designed.

So Instagram made a child spend 16 hours a day on the app.

It was not really a bug.

It was the intention.

So on the other hand, I also agree with you about, I think...

And I think it's really common in technology regulation.

Something needs to happen, a really specific case needs to happen in order to get the
broader picture regulated also.

And I think that this case can maybe have some implications on regulation as a whole.

Exactly.

And I think there's also this camp that says that this doesn't need to be regulated, you
just need to have parental controls on this and it's up to the parents to control this.

But I don't agree with this.

think as a society we also have a role in this.

To me it's very much similar to cigarettes, right?

Like, if parents can control them, we also don't need any regulation on age for
cigarettes.

Like, then it's up to the parents, right?

But we all know that as a society, we're not the greatest at this, right?

And I think it's complicated for parents to also enforce that and also for platforms to
enforce.

I think the age limit threshold is 13 on Meta.

I think it was written in the article.

But well, I guess that a child can just bypass this threshold.

But yeah, I agree. I was thinking the same thing: look at social media as a drug, right? I mean, I try to look at it objectively, but if you compare this to tobacco or alcohol, I think it should be treated the same way, right?

I think there should be legislation.

I think there should be controls.

I think there should also be more, how do you say, education around it, in a way, you know?

I feel like every time you buy cigarettes, there's like... I mean, at least in Brazil, right? There's a picture, like, this is what a smoker's lungs look like, right? To really try to bring awareness.

I do think all these things should be there.

And maybe also, I'm kind of glad that I grew up in the time that I did because I did catch
a bit of social media, but I didn't catch it when I was really young.

And even me as an adult, I can see how addictive it is, right?

I do get very hooked.

And I think...

Yeah, you mentioned dopamine hit and it's not just like not going out.

But also if I spend a lot of time on the screen, I do feel drained.

I do feel like there's a little, quote unquote, depression, kind of like a dip, you know? It's not just a dopamine hit and you're happy and then you go and do something else. I do feel like it weighs you down.

Right.

I'm even reading a book about addiction, actually, called Dopamine Nation, where they talk about a lot of these mechanisms in your brain.

Right.

So I do think it's good.

Myself, for example: I have an Instagram account, I have Facebook, I have Reddit, I have X, but nowadays I don't have anything on my phone. I deleted everything because for me it was difficult.

How much time? Also 16 hours a day?

Not 16 hours, not 16 hours.

But like you said, I don't need to spend 16 hours to feel drained. I feel like I wasted so much time, and I don't know, it's not a good feeling.

So, I mean, if as an adult it's hard for me, imagine expecting a kid to self-regulate and all these things.

And for me, it raises also a broader question, which is who should be accountable for
what?

Is it the parents?

Is it the governments, for not educating children enough?

Is it the platforms?

And I think that in the tech ecosystem, these kind of questions are everywhere.

Also for data protection, who is accountable for data?

Is it the final user? Is it the data center location? Is it the government of the country where the user is?

So I think it will be answered by governments shortly, but right now these are questions.

indeed.

Maybe moving on we have

OpenAI and Anthropic are both racing to lock in enterprise clients through joint ventures with private equity firms, where the PE firms bring their portfolio companies as customers and the AI labs get distribution at scale.

OpenAI is offering partners a guaranteed minimum return of 17.5%, well above market, and is in advanced talks with TPG, Bain Capital, Advent International and Brookfield to raise about 4 billion.

Anthropic is pursuing its own version, courting Blackstone, Hellman & Friedman and Permira.

A lot of names.

Though some major buyout firms have already walked away questioning whether the economics
actually work.

So basically, maybe for dummies like me that don't know about this: what is private equity? What are they saying here, right? Like, what does this all mean?

Private equity is basically funds buying companies. It can be minority stakes or majority stakes; in this case, it will of course be minority stakes.

And the goal of these private equity firms, these funds, is to buy a company, make it grow over, I don't know, five, six, seven, eight years, depending on the fund, and to sell it at the end of this period.

So basically, what they're saying in this article is that OpenAI is willing to offer these private equity funds a 17.5% guaranteed return.

I've never seen that, honestly.

So basically, when a PE fund is buying a company, they are not making a bet, they are making an investment, and the profits are uncertain.

But it's the job of the PE fund to make the company grow by, I don't know, changing the management, buying other companies, which is called buy-and-build in finance, or just improving the efficiency of the operations.

There, OpenAI is just saying, okay, buy me and I will just give you yields.

which is surprising.

Can they guarantee the 17.5 %?

Can they actually say this is guaranteed?

No risk? I don't know.
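To put that 17.5% in perspective, here is a quick compounding sketch. The roughly $4 billion is the amount from the news item; the multi-year horizons are purely illustrative assumptions:

```python
# Back-of-envelope: what a guaranteed 17.5% annual return implies
# for roughly $4B of committed capital. The horizons are assumptions
# for illustration, not from the article.

principal = 4_000_000_000  # ~$4B the article says OpenAI wants to raise
rate = 0.175               # the guaranteed minimum return

for years in (1, 3, 5):
    value = principal * (1 + rate) ** years
    print(f"after {years} year(s): ${value / 1e9:.2f}B")
# after 5 year(s): $8.96B
```

In other words, if that 17.5% compounds annually, the commitment more than doubles in five years, which is why a "guaranteed" version of it is so unusual.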

Maybe to specify a little bit, because there's a lot of news on this round of OpenAI. So there is one big round, which as I understand it is a 122 billion round at a post-money valuation of 852 billion, which they just announced a few days ago.

And next to that, they have the Skoll investment joint venture with a number of PE firms, which is smaller, valued pre-money at 10 billion. And I don't know exactly what the structure is in this vehicle where they're promising this return.
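For reference, here's a small sketch of the standard pre-money/post-money arithmetic behind those numbers, using the 122 billion and 852 billion figures from the discussion:

```python
# Standard venture math: post-money = pre-money + new investment,
# and the new investors' stake = investment / post-money.
# Figures are the ones mentioned in the discussion.

new_investment = 122e9   # the ~$122B round
post_money = 852e9       # the ~$852B post-money valuation

pre_money = post_money - new_investment
stake = new_investment / post_money

print(f"implied pre-money valuation: ${pre_money / 1e9:.0f}B")  # $730B
print(f"new investors' stake: {stake:.1%}")                     # 14.3%
```

So at those numbers, the new investors would hold roughly a seventh of the company, before any later dilution.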

And well, you can promise everything, right?

Of course.

So in this case, it's the external vehicle that will promise the 17.5%, if I understood the structure well, and it will not be OpenAI itself promising 17.5%.

So they don't put the balance sheet at risk.

Exactly.

And I think what is actually going on here, which actually happens a lot in large private equity, especially US funds, is that from the moment that, as a fund, you invest in something, you also bring the other portfolio companies as customers.

And what OpenAI is trying to do here, because Anthropic is also trying to court all these private equity firms, is saying: the return is going to be so good, you basically can't say no.

So come now with all your portfolio companies and all those portfolio companies will then
become OpenAI customers.

Because in the last months we're definitely in a race over who will become the B2B player in the AI world, right?

On the other hand, of course, it will be a good thing, not only for OpenAI, because they will raise funds and spread, as you say, their GenAI solutions into the portfolios of the large PE funds and drive enterprise adoption.

But it's also good for the funds, I think, because they will integrate the newest OpenAI solutions into their portfolio companies, maybe increase productivity, and it's good marketing as well. And also for the underlying companies of the funds, since they can use the newest OpenAI solutions.

So it can be a virtuous circle.

Exactly, probably they are also promising that they can use it at a reduced rate or
something, so there is something in it for everybody.

Exactly.

Yeah, but people are still saying no. Like you mentioned, this is an offer you can't refuse, it's all sweetened and all these things, but apparently, from reading the article, they're still having a bit of a hard time, right? A lot of firms are still saying no.

But the thing is, we're getting these promises of, what was it, 17.5% returns quite late on. And as long as the market keeps going up, as long as valuations keep increasing, all of that is fine. From the moment that stops, you still have these guaranteed returns that need to be paid out. So investors that came in earlier get squeezed out; it becomes a bit of a house of cards, right?

But I think that house of cards has been building for a few years now already.

Yeah, and also this is why some PE funds are just backing off this deal. I think Thoma Bravo was mentioned in the article. They are just saying, okay, this is too much for me, especially when you know that OpenAI is just burning cash right now. They are not breakeven. I think the loss last year was more than 10 billion, or just under. I don't know, but they are losing money.

Break-even is forecasted for 2030 or 2028, something like that. So once again, it's kind of a bet.

Yeah, and also, I saw a while ago, I don't know if it's still up to date, but they projected losses to go really deep and then in one year to turn everything around. Which seems, from me looking from the outside and not knowing how these things work, a bit unrealistic. But okay.

And it also feels like OpenAI is still trying to find its way, right? Like they pulled out Sora, they deprioritized some other features. Now they're also focusing on B2B, which is what Anthropic has been focused on from the beginning. So it feels a bit like they're still searching.

But we'll see. We'll see what happens.

And what's important, and I think we will talk about that in the next episodes, is that Anthropic and OpenAI are battling, and they are maybe preparing to go to IPO. Some articles mentioned one year, some others six months. So we'll see also.

Yeah, and I think they're expecting three very big ones this year: Anthropic, OpenAI and SpaceX.

But I think the people running OpenAI are smart people, and they probably have the best bankers. I think it will be a good IPO. And they are talking about a one trillion IPO. But let's come back to that later.

Next it's on me, I think.

Fivetran has transferred SQL Mesh, the open source data transformation framework it acquired through its purchase of Tobiko Data, to the Linux Foundation for community governance.

Six founding member organizations will support the project, which helps data teams manage complex SQL transformation pipelines with built-in testing, versioning and automation.

The move signals Fivetran's belief that the transformation layer of the modern data stack should evolve through open collaboration rather than corporate ownership.


A bit about this. So, Fivetran bought Tobiko Data, which had SQL Mesh, and then Fivetran bought dbt Labs, which has dbt. And they're a bit competing, right? They're two different approaches to the same problem, let's say.

And I think there were a lot of questions about what they're going to do with it. And I think after some time, we have a bit of an answer, right? They're focusing maybe on dbt, with dbt Cloud, and SQL Mesh becomes open source; they donate it, right? Which...

You mentioned in our pre-production chat, right, that it's a good move from Fivetran? They still gain points with the open source community, and they don't need to maintain it anymore.

Yeah, well, you said that they will focus on dbt; it's not necessarily that this is about dbt versus SQL Mesh.

No, but I think they had both and they let one go.

So to me it signals implicitly that dbt is the product now. The other one is not there anymore, and they will focus on dbt.

Interesting.

Why not?

To me, it's a bit more complex than that. I think last year Fivetran and dbt kind of merged, but also not really; on paper they merged, right? Fivetran also bought Tobiko, which had SQL Mesh. SQL Mesh is a big competitor to dbt Core, which is dbt's open source toolkit for data transformation.

And what we're seeing now, I think, is that maintaining open source packages as a commercial company is hard.

I think it costs a lot of money to do it, but it's also very tricky from a point of view
of how are you seen by the market?

Are you governing it correctly?

Are you doing it correctly?

Are you interacting with the community correctly?

All these things.

And I think what Fivetran is doing now, and looking very good doing it, is saying: we are the good parents of SQL Mesh and we will donate it to the Linux Foundation, which also has a very good name in the community.

And it seems a very good signal, which I agree with.

Like it's probably the best for the future of SQL Mesh.

But what Fivetran is basically doing here is saying that we don't really care about this specific tool, we care about the market that is using this tool. And the market that is using this tool can actually use our platform to run it on.

But let's put the nuances of maintaining this or something like this, let's put it out in
the community.

I think that is what has happened.

And SQL Mesh was already open source, even when it was officially under Fivetran, right? So I think it's more like officially changing hands, passing it from one organization to another.

Yeah, but they still capture the market, right?

Everybody that is using this can still do it on their platform.

The headache of maintaining an open source package, they're just giving that away.

Yeah, good move from them.

And maybe the reason you don't fully agree with what I said before: maybe they're focusing on dbt because, with the dbt Labs merger, it's not very clear where one company stops and the other one begins?

I think long-term, what they will focus on is just being a platform where you can host these transformation runtimes, where you can store all your data, which will become your data lake, data warehouse, lakehouse, whatever: your one-stop shop for everything data transformation and data storage.

And what is in their best interest is that there are big communities that are very much in
love with packages that you can...

run easily on their platform.

And SQL Mesh is one of them, dbt core is another one of them.

But I don't think they care necessarily about those specific tools.

They care of having the market that uses those specific tools run on their platform.

Yeah, I see what you're saying. Interesting.

Shall we move on to the next?

Mistral AI raised $830 million in its first-ever debt financing to build a data center in
Bruyères-le-Châtel, near Paris, powered by 30,800 NVIDIA GPUs with 44 megawatts of

capacity.

Seven banks backed the transaction, including BNP Paribas, Crédit Agricole and HSBC.

with the facility expected to be operational by Q2 2026.

The company has also committed 1.4 billion to build AI infrastructure in Sweden, targeting
200 megawatts of compute capacity across Europe by 2027.

I really liked that Rafael read it, because he actually pronounced the names correctly.

I was just like, wow, this looks so great.


I didn't do it on purpose.

Exactly, exactly.

It just happened, you see.

So, Mistral is the big EU player for foundational models, right?

So, they're building a data center near Paris. I know I mentioned earlier that there are a lot of data center stories, everyone's talking about data centers, but I think it's good that we hear about one in Europe, right?

It also mentions Sweden.

One question that I had reading this is what's the story of this versus the AI factories. Bart, when we talked to Charlotte from the Flemish government, we talked about the AI factories, right? Which seem like data centers spread around, and they want them to be collaborative.

I guess the Mistral AI one is not part of the AI factories, so it's really proprietary, right? I guess.

But how does it all play together, right? I don't know.

I can imagine that they can use some EU funding for this, right?

True.

But if I remember correctly from the discussion with Charlotte, AI factories are really supposed to be for European companies, and they were trying to give subsidies as well to encourage people to use these supercomputers and all these different things.

But this one is a bit separate from that, I guess, right? It's more commercial, really more Mistral-focused.

I don't know the exact numbers, but they have been apparently doing very well servicing
corporate needs across Europe in the last years.

Also because it's probably the only real sovereign provider in Europe.

And what they're doing now, well, I think it's a signal that they're doing very well
because they raised 830 million in debt.

I think you can only do that if you have like good...

figures to show.

And they're doing this to basically build a data center that powers 30,800 GPUs. There's also a way to think about how CapEx-heavy these data centers are, because it's, what is it, 60, 65,000 dollars per GPU, right? So these are huge investments.
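A quick back-of-envelope on that, taking the 30,800 GPUs and 44 megawatts from the news item; the per-GPU price range is the ballpark mentioned in the conversation, not a quoted figure:

```python
# Rough CapEx and power arithmetic for the data center discussed.
# 30,800 GPUs and 44 MW are from the news item; the per-GPU price
# range is the conversational ballpark, not a quote.

gpus = 30_800
price_low, price_high = 60_000, 65_000  # assumed $/GPU
power_mw = 44

capex_low = gpus * price_low    # 1.848e9
capex_high = gpus * price_high  # 2.002e9
print(f"GPU hardware alone: ${capex_low / 1e9:.2f}B - ${capex_high / 1e9:.2f}B")

# Average facility power budget per GPU, including cooling and overhead
watts_per_gpu = power_mw * 1e6 / gpus
print(f"~{watts_per_gpu:.0f} W of facility power per GPU")
```

At that ballpark, the hardware alone would already exceed the 830 million of debt raised, so treat these as rough orders of magnitude rather than a project budget; real deals often involve leasing and vendor financing.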

It requires much more capital, this CapEx, and also much longer payback periods. So they are in direct competition with the current hyperscalers that have invested a lot in this kind of solution.

For me, the question is also whether the European sovereignty angle is enough of a differentiator to justify these investments. But I think it's good news.

Yeah, but also if you compare this to the OpenAI story before, which is very much VC-driven, right? Really betting on growth. This is debt-driven: you're betting that there's actually going to be cash flow. It sounds more mature as a company, like there is a clearer product-market fit already, or that they're growing at a more reasonable rate, and that there is actually current demand for it in Europe. That's what it seems like to me.

Yeah, and this makes sense, I think, because data centers can also provide predictable cash flow. So debt makes sense compared to equity. They raised some equity before, of course. So yeah, that's also, I would say, a good investment for the investors as such.

And I really like this news, as it creates more data sovereignty for Europe. Mistral is selling right now a solution where data never leaves Europe, which is really important for EU companies right now.

Yeah, I fully agree.

It's good news.

One side note on Mistral as well: I heard that the state-of-the-art voice model for Dutch is actually from Mistral. So I think maybe they're also focusing a bit more on the European languages and all these different things, which I think makes a lot of sense.

So there are a few gaps, maybe not huge gaps, but gaps that I think Mistral is also filling.

And I think it's just good, like competition is good.

And I think bringing these things to Europe as well is also good.

So yeah, really happy to see this as well.

And the interesting question is whether the argument that data never leaves Europe is strong enough to compete with maybe more efficient US and Chinese models.

Fair point.

Maybe to counter that, I think, with the whole geopolitical climate that we've seen evolve in the last two years, maybe efficiency alone is no longer a strong enough argument not to choose sovereignty.

Do you use, guys, Mistral AI?

Hmm, not really.

I should say yes, right? But we're not right now.

I think the problem is that they are typically slightly behind the state of the art. And at least in the situation that we're in currently, where we're building, you want to test what the performance of the state of the art is.

Yeah, from my side and what I see from colleagues, the big use for us is generating code. I think the best one there is Claude, right? And most of everything I do is in English; most of what most people do is in English. Here I also sometimes see a bit of Dutch, a bit of French, but in text I think it's okay.

So I do see Mistral a bit more on the niche side, like I said, like the colleague who shared about the Dutch state-of-the-art voice model.

And I think if you have something customer facing, then you need a voice bot and you have
different dialects here in Belgium, right?

And then it's a bit more niche.

Then I think there's a bigger case to use these models.

But I think for most things, just staying with Claude is sufficient.

Yeah.

Should we move to the next one?

Yeah.

More EU news, right? Go for it, Murilo.

The AI note-taking app Granola secured $125 million in Series C funding led by Index Ventures and Kleiner Perkins, reaching a $1.5 billion valuation, a six-fold increase from its $250 million valuation less than a year ago.

The company is expanding from a meeting note-taker into a broader enterprise AI platform with agent capabilities and newly announced public and enterprise APIs.

Customers include Vanta, Gusto, Asana, Cursor and Mistral AI, bringing total funding to 192 million.

Yeah, I think they're a unicorn now.

Yeah, hitting unicorn status.

So really cool.

Granola, for people that don't know, I would describe as a note-taking tool thingy.

it pops up when you have a meeting, it takes a transcript, you can still take notes next
to it.

And at the end of the meeting, it basically enhances the notes you have with the transcript.

And you can also chat with your notes as well.

It also has MCP tools you can connect to Claude. So if you say, Claude, what did we talk about yesterday? It will be able to fetch it. So it's a very nice product, I would say, from the UK.

And yeah, I think, yeah.

Anyway, I personally kind of like Granola. I don't know if everyone else here has used it, but yeah. What do you think, Bart?

Yeah, I like it as well. I'm, of course, a bit biased, because what we're building with Top of Mind is somewhat related. But what Granola does very well in this age of AI is that you can capture information very frictionlessly. I think that's what they really excel at: they stay out of the way, but they do capture every day all the meetings that you have.

And it takes a concern away: I don't need to make notes, Granola is capturing it, so I can look at them later.

And they have been highly, highly efficient at building this, at growing this.

It's actually a very impressive story as well.

The history is a bit that they raised 4.25 million, I want to say in 2023, before they had any users, just two months after they were created. Then they built for one year a bit in stealth mode, came out, and then basically 34 months after that initial 4.25 million, they're valued at 1.5 billion.

I mean, it's crazy, yeah?

I think that we should also challenge these valuations.

The company was valued at 250 million, now 1.5 billion.

It would be interesting to see how they compute it also in the cap table.

True.
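On the cap table point, here's a small sketch of the standard round math, assuming the reported $1.5 billion is a post-money valuation (typical for reported rounds, but an assumption here):

```python
# Sketch of the dilution math for the round discussed: $125M raised
# at a $1.5B valuation (figures from the news item; post-money assumed).

raised = 125e6
post_money = 1.5e9
prior_valuation = 250e6  # the valuation less than a year earlier

pre_money = post_money - raised
new_stake = raised / post_money
multiple = post_money / prior_valuation

print(f"implied pre-money: ${pre_money / 1e6:.0f}M")           # $1375M
print(f"Series C investors' stake: {new_stake:.1%}")           # 8.3%
print(f"valuation multiple in under a year: {multiple:.0f}x")  # 6x
```

So the headline 6x jump implies the new investors bought a fairly small slice at a much higher price; what existing holders actually own after option pool changes and earlier preferences is exactly the cap table detail being questioned here.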

What I find very interesting, and I can't put my finger on why, is that I don't see any advertisement for them anywhere, but everybody that I talk to knows about Granola.

So what I would also like to see is how many users they actually have, right? I do think they're very viral and they grow their user base very quickly. But I think there are challenges to convert those free users to paid users, and I would be very interested to see how well they're doing there.

Because I'm still a free user. I've been using them for months and they keep bugging me to go Pro, but they're not limiting my functionality. I think they were afraid to start blocking users at the risk of actually losing them.

Yeah, what I noticed recently is that they're trying to add a time limit: if a meeting is older than one month, then you lose access. So I think they're starting to do these things, I agree.

I think it's a very sticky thing.

It's very easy, like you said, it's very frictionless.

So it's very sticky.

So I feel like it's very easy to get started with it and very easy to rely on it, to expect it, you know. Like if you don't have it, then you're gonna miss it.

But I agree, they're also being very generous: it was free for a long time and you could do whatever. Now they're trying to limit it a bit with the expiration date on older meetings. And every time you create a new workspace, they give you one month of Pro for free. So if you just keep changing workspaces and migrating your meeting notes once a month, you can just use it for free. So I think they're going to tighten it up, but let's see how many people they lose along the way.

Yeah, but they also announced a public API. So I think they also want to move from being an end-user app to being infrastructure as such.

And it can also be that the play they're doing is that they will become the repository of all your personal data, which will become very valuable, right? Like if you can share this with other AI assistants and those kinds of things. I think that is the play they're trying to make, which makes sense. And this is the endgame.

Yeah, maybe for another time: you mentioned this is tangential to Top of Mind. I'll be very curious to hear your thoughts particularly on this part, but we'll leave it for another.

Yeah, another day.

Another time, because I will actually have to run to my next meeting.

all right.

But then I think we stop here today.

Thanks, everyone.

Welcome again, Rafael. It was fun having you in the discussion, and looking forward to building Bright Signal.

Looking forward to it and looking forward to the other format as well.

Exactly.

I think next week we will also take the Easter break, right?

Then hit the ground running. We still need to see the practicalities, but the next time we're back, it's for an interview.

Yes.

I think so.

Alright, thanks everyone, thanks Bart, thanks Rafael, thanks everyone for listening.

Ciao!

Ciao.