Trusting big tech in the age of AI

What does it mean to be a responsible AI citizen in a world that's just getting crazier every day?

When I recently attended the Cloud Technology Townhall Tallinn conference (CTTT 2025), it was no surprise that an event with a Microsoft technology focus had plenty of AI content on its agenda. What did stand out, though, was how many of these sessions covered not just AI’s potential but also its risks.

I was delighted to see how the discussion our community is having around the impact of artificial intelligence is moving towards the harder, more important questions. We’re no longer simply amazed by the party tricks LLMs can perform. We are reaching the point where people even inside the tech industry are rightfully asking, “is it okay to use AI in this way?”

Since the generative AI models of today are highly flexible, general-purpose tools that don’t depend on predefined features developed by software vendors, the risks have already begun to materialize in the world around us. Applying technology without worrying about the consequences is never okay.

Yet it didn’t use to be such a dramatic issue back when many of us in the MS BizApps ecosystem were merely building internal business apps. Sure, the UX often sucked, the right data wasn’t available at the right time, and automations weren’t running reliably. But it wasn’t a life-and-death issue.

Today, the same AI technology we have access to via the MS cloud is also visibly shaping the everyday world around us. People of all ages and backgrounds will encounter it and be impacted by systems that take advantage of its powers. Not just your colleagues, but your children and parents, too.

Very few people will have the kind of understanding that we as MS technology professionals do. Heck, we don’t even need to update our LinkedIn profiles with a “Director of AI” title to become experts in the field - because expertise is all relative to what you do and where you work in the realm of tech.

“Chris & Yannick bond over AI in a fireside chat” session at CTTT 2025.

Case in point: during their AI fireside chat session at CTTT 2025, Yannick Reekmans and Chris Huntingford ended up showing my LinkedIn profile on screen as an example of an AI thought leader. More precisely, as someone who exhibits critical thinking rather than just boosting the official technology vendor gospel.

Do I pretend to have education or experience in the science and technology behind AI? Nope. Do I build cutting-edge AI solutions for business customers? Also no. Sheesh... Okay, one last attempt. Do I think about the phenomena I see playing out in the field of AI & IT and express the thoughts I formulate based on this work? Yes, I do!

This, ladies & gentlemen, is one possible way to strive for responsible behavior in one’s own field of expertise. AI isn’t a model or a service, any more than the internet is about cables and routers. You don’t need to be in the business of AI research or cable manufacturing to have your world shaped by AI / the internet.

Back when the internet was becoming mainstream, you didn’t need Cisco to instruct you how to do your specific business in the connected world. Similarly, you don’t need Microsoft to tell you what the right way is to make use of AI. You can listen to their pitch and the suggested ways to apply their products, but the rest is up to you.

You don’t become an expert in all of this by just listening to the vendors envision a better world operated by AI agents. When it comes to the practical applications of AI, we all need to own it. How do we do that then? I believe it includes A) formulating your own opinions, B) making active choices instead of being passive targets, and C) taking a stand for what your personal values say is the right thing. That last part is the hardest yet most important one. It’s what I’m writing about here today.

Our digital toolkits aren’t isolated from the society in which they get built and used. I don’t want to get into a deeper political discussion in this newsletter. I do feel, though, that it’s beneficial to explain my perspective on the state of the world through a few personal examples. These are deeply connected to element C above, the part about your personal values.

How I became an evangelist for U.S. big tech

I have been working in the Microsoft ecosystem for around two decades now. The story of how I ended up here begins with the search for a new CRM system for a small, semi non-profit org in Finland. In 2005, it was time for them to find a more scalable solution to replace a Lotus Notes/Domino based customer data management system. The choice ended up being Microsoft Dynamics CRM 3.0. The rest is history, written on my virtual CV.

Before this, I was the typical nerd who didn’t exactly root for MS - rather, it was often seen as The Dark Side of tech. I had been using DOS and Windows-based PCs for over ten years before starting to do it professionally. Wintel was the dominant force in the market, yet users weren’t necessarily choosing it out of passion for PCs. It was just the most logical choice, the path of least resistance. As for Microsoft, the company had been in legal hot water for its monopolistic practices. Hating on M$FT was as trendy as ever. For perspective, Paul Graham wrote his “Microsoft is dead” article in April 2007.

With the global coverage and market dominance on the scale of MS comes something universally good, though: a broad user base serving as the basis for communities to emerge. As I began to dive deeper into MS technology in my professional journey, focusing on BizApps like CRM but also touching more mainstream products like SharePoint, I began to understand the upside in it for us individuals.

Sure, the “better together” story of MS product marketing was often not true on a detailed technical level. Yet it was 100% true when applied to the ecosystem. We users, admins, consultants and the rest - we were better together. Better than what could have been achieved with local products and best-of-breed point solutions. Combined with the rise of social media and the Web 2.0 era, a global community quickly began to form in a decentralized manner over Twitter hashtags and other digital grassroots phenomena.

That was My Thing, more than anything that came before. It encouraged me to cultivate the “working out loud” mentality I had become a fan of when observing Enterprise Social technology challenging the traditional channels and power structures inside organizations. By sharing my thoughts and observations outside the corporate firewall, I formed connections with like-minded people around the world working with this same technology. My blogging habit led to an 11-year run in the Microsoft MVP Award program.

It felt like I was fighting for a worthy cause. Along the way, Microsoft also changed from the closed Ballmer-era beast into a more collaborative Nadella-era giant. Everything was still big, but not in an evil, abusive way. On many fronts, MS was no longer trying to defend its market position and block all competition - largely because it had lost the battle on so many of those fronts to other tech giants (search to Google, mobile to Apple, social to Facebook, cloud CRM to Salesforce, and so on).

Being a challenger that aims to change the game is the role I’ve always identified with most closely. Growth in business alone is not sufficiently motivating for me to consider it a success. It needs to be the result of doing something that I see as a just cause:

I’ve derived the core of my own “Why” from a common theme across different working roles I’ve had. It wasn’t a planned journey, rather it has emerged as a direction that can only be truly observed in retrospect. I can’t pin it down to a single phrase exactly, but it has a lot to do with the idea of democratizing the tools of creation. Making it possible for those with the need, the idea, the passion, and an open mind to forge ahead without needing to beg for support from others.

One such example has been the low-code movement that I’ve evangelized quite a bit during the past few years:

Why did I feel like it was a great idea to bet everything on the Power Platform a few years ago? Why was Microsoft the right ecosystem for this? Because their core strength from the early days of the PC has been putting something on every desk. Mainstreaming a technology is an important step in democratizing it, after all. MS Office was not just the distribution mechanism for Power Apps/Automate but also the anchor for the persona of a power user gaining access to these citizen developer tools. That has been instrumental in making Power Platform the impactful product family that it is today.

I’ve had good reasons to be the evangelist for this change. I am still mostly comfortable with the Microsoft ecosystem today when it comes to “real” products that can be applied to solve actual customer problems. When it comes to AI, well… Subscribers of this newsletter will have seen that I consider Copilot to be a solution in search of a problem.

I speak up but I don’t give up, because I believe things can still change for the better. I don’t think Microsoft has lost its way just yet. This is all part of a larger industry phenomenon that no major player can protect themselves from. Some, like Apple, are playing it safe while still participating in the AI game (and facing deserved ridicule for iPhone features promoting hallucinated slop).

Companies like Microsoft that have chosen to put many of their eggs in the OpenAI basket now have to act like they know exactly what’s going to happen once AI becomes as mainstream as Office tools. To the extent of rebranding their Office tools as AI tools and gently forcing consumer customers to use them.

I personally don’t see the proper Just Cause there - nor for GenAI in general. This is not because it couldn’t be impactful, but precisely because the breadth of AI’s impact on our society is not being addressed openly and honestly. It increasingly feels like we individuals are, once again, on the receiving end of novel technology. It’s serving us nowhere near to the extent that it’s serving venture capital right now.

Today, AI is primarily a tool of big tech and big money, rather than a tool for every one of us - which would, in turn, make it a worthy business (the traditional way to make money). The most frightening part is how it is turning into a tool for big power, too. Meaning governments, rulers, oligarchs.

The dividing forces at play

Having reflected on my story of what led me to this place in the Microsoft business applications world, how has the world in general changed while I was busy doing my own little thing inside the MS ecosystem?

Today it’s 2025 and we as individuals have all the tools in place to achieve something that would have been impossible in 2005. But are we “better together” now in a way that takes advantage of the innovations from two decades of work that led us here? My subjective conclusion is no, we aren’t.

Technically, people have more means to get their message published out into the world. At the same time, it is objectively true that the visibility of these messages is controlled by central algorithms more than ever before. The digital world seems open, compared to the past era of classic media gatekeepers deciding what gets published. But who decides what gets seen? That’s the real question to ask.

The open Web 2.0 era gradually turned into the age of walled gardens as smartphones took over. Users began choosing the convenience of central platforms from the likes of Meta and Google over more fragmented web communities. Powered by ad-funded business models, free visibility of content became a bug to fix, rather than a feature to embrace. Social media platforms like Twitter, LinkedIn and Facebook all began to downrank links that led outside the platform, to keep users scrolling and seeing more ads. “For you” became the default feed experience, rather than the people you had connected with or followed.

Big American tech ended up controlling every non-physical element in the global communications chain. OS, app stores, platforms, users, data - especially the data. As for the objections from the world outside U.S. borders, primarily from the European Union, arguing that user data was not freely available to be collected wholesale and combined across sources to build ever more powerful algorithms - yeah, those arguments were not exactly welcomed by the tech giants in charge. “Who do those pesky European officials think they are, with their GDPR and fines imposed on our glorious big tech?”

Then along came TikTok. Suddenly it wasn’t okay for a non-U.S. company to play by the same rules. Well now. This presented a bit of a dilemma, since regulating all apps with similar practices would have created major business blockers for the likes of Meta. So, a political theater play was orchestrated by the 45th POTUS, who recently returned as no. 47. He still had the same mess on his hands, originally initiated with his own executive order.

At the time of writing this, it’s unclear whether Microsoft could end up becoming the new owner of TikTok. I’ve never used the app myself so this wouldn’t make any difference to me immediately. Down the road, I’m not sure if such a move would make MS a better or worse company as a result. At the same time, I can’t think of any corporation that I’d be happy to see become the owner of TikTok. Because what’s ultimately happening here is not cool at all.

While the U.S. public has been lured to watch and share their reactions on what will happen to their favorite time sink app, all in the name of cybersecurity, a whole different kind of cyber operation is in action. By first spending $44B on buying a social media platform and then apparently using “only” $277M to buy access to the office of the current president, Elon Musk has reportedly gained full read/write admin rights to much of the data the U.S. government has on its citizens and organizations:

Expert sources are saying this is a level of insider threat that has never been witnessed before. Even putting aside all questions about the state of the democratic system in the United States at this time, the fact that gaining access to these government systems was as easy as walking into Twitter HQ shouting “let that sink in” is just… Well, I don’t actually have the right words to describe it. I keep reading and sharing reports of how AI is being used on government and citizen data, no questions asked. I didn’t expect it could be so simple for wealthy individuals to break every rule in broad daylight - in the country that controls almost all of big tech globally.

I don’t want to attach here any of the million meme posts I’ve seen over the past week about this. It’s such a devastating blow to the credibility of a nation that holds the keys to most of our devices and cloud data. If there ever was any doubt that allied nations need strict regulation, local app and data storage alternatives, and backup plans - that doubt has mostly been swept away now. Politicians in European governments just aren’t ready to state it out in the open yet, for the most part, as they would not benefit from the concern it would create among citizens.

The shifting default assumptions

Even before Musk’s DOGE team members (who included known hackers) plugged in their servers and compromised U.S. government data, there was an event that made me realize the shift in my level of trust towards household names in big tech and AI. You’ve all heard of DeepSeek by now, and very likely know that it was developed in China. Whether it was a planned publicity operation or not, it most definitely turned into a “TikTok 2.0” moment.

As the R1 model became available for users around the world to try out, it presented a new kind of situation for us in Europe. Or maybe I should talk about us in Finland, since I don’t want to overly generalize my observations here. To give just the shortest possible context to this: 1) we are a nation of 5.6M people in the Nordics, 2) we share a 1,340 km border with Russia, 3) we finally joined NATO when Putin was looking elsewhere (i.e. invading Ukraine).

Throughout our 100+ years of independence, Finland has aimed to position itself as part of the West. Partially this is of course a strategy to build mental and political defenses against the risks of what’s right behind our eastern border. But there’s more to it, from a national identity perspective that relates to shared values. Personally, I feel that up until around a decade ago, there was a tendency for many of us to “default to the US option” in many areas of life.

In information technology, it has been a natural default - more so than in any other industry, most likely. When tech evolved to also cover our data (through the rise of cloud computing) and our media (through the rise of social and the fall of traditional formats), there was observable hesitation. “Is this dependency a smart direction? Well, there aren’t any quick solutions to it, so let’s just go with the US companies for now.” I think it all happened too fast for society at large to have much say in the path chosen.

Now comes AI. “Would you like your usual default option, Sir?” This time the starting point is a different reality than when we all signed up for Facebook. Quite different indeed - at least for those who are following the tech side of things. But that’s just the mental side. If you don’t see the possibility of changing the default to anything else, you’re unlikely to take action.

Up until now, most of the dystopian examples of what surveillance technology and AI tools like facial recognition can be used for have been coming to us from China. Not because its technology has been any more advanced than what the NSA, Meta et al. are capable of. Rather, the story has been told in a way that shows it targeting vocal individuals and oppressed minorities in China. That’s scarier than US actors doing it in bulk, to everyone, because I guess that’s how the human psyche works when it comes to rating the risks we observe.

With DeepSeek, it was the first time I can recall having these two players presented on such an equal playing field. I got to ask myself, “which tech supplier side do I consider more evil?” And I honestly couldn’t see my brain making a default choice in this matter. It’s as if I had found myself in the middle of this scene from The Office:

In online discussions, I encountered fellow Finns warning people about the risks of using any Chinese technology. In the example below (translated), I had to think for a moment: “is the guy posting it as a sarcastic argument?” Because to me it looked like way too perfect an example of what Chinese and US technology have in common - not the difference between them.

LinkedIn discussion (originally in Finnish) around the risks of using DeepSeek.

He never replied to my comment, so I won’t know his true sentiment. And that’s not important now. I’m under no illusion that public opinion on the risks of foreign technology from different superpowers has radically shifted yet. What I’m simply saying is that it was a startling moment I observed in my own thinking. I didn’t even need to have my Snark 9000 mode activated to state that “they’re the same picture” when it came to choosing an LLM from China or the US.

Thinking about trust, the topic of this long text - that’s how things tend to go in life. You trust people, brands, and other entities because of something they’ve demonstrated. They don’t need to be continuously perfect to remain trustworthy. You don’t need to agree with everything; you just maintain a general level of “trust by default”. It’s not a matter of blind trust at all - rather, you merely choose not to evaluate each and every action from the other party on the same level as you’d need to with a complete stranger. A new brand, a new service provider, and so on.

On an everyday level, we simply couldn’t spare the energy to operate in a world of “zero trust”, doubting everything around us. Still, trust is not forever. We do re-evaluate our levels of trust as a background process of sorts. When enough suspicious activities get flagged and we check the logs to reflect on the big picture, the trust may get revoked.
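To stretch that system metaphor a little further, here’s a toy sketch in Python of what “trust by default” with a background re-evaluation process might look like. Purely illustrative - the party names, events, and threshold are all made up:

```python
from dataclasses import dataclass, field

@dataclass
class TrustRelationship:
    """Toy model of trust by default: most actions pass without
    evaluation, suspicious ones get flagged in a log, and enough
    flags trigger a re-evaluation that may revoke the trust."""
    party: str
    trusted: bool = True
    threshold: int = 3
    log: list[str] = field(default_factory=list)

    def observe(self, action: str, suspicious: bool = False) -> None:
        if not suspicious:
            return  # trust by default: no per-action scrutiny
        self.log.append(action)  # background process: flag it and move on
        if len(self.log) >= self.threshold:
            self.trusted = False  # checked the logs; the big picture looks bad

# Hypothetical events, just to illustrate the mechanism:
vendor = TrustRelationship(party="Household Tech Brand")
for event in ["downranked outbound links",
              "rebranded tools without asking",
              "shipped features nobody requested"]:
    vendor.observe(event, suspicious=True)

print(vendor.trusted)  # False - the default no longer applies
```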

That’s how it’s got to be. We need to remind ourselves of the need for critical thinking when the circumstances change. In his CTTT 2025 closing keynote, Chris Huntingford talked about “defining the default for the next generation” - what it means to be a responsible AI citizen. I believe it ties very much into these moments where a considerable change is taking place (mass adoption of AI). When we’re presented with the option of following along the path that those parties we’ve previously considered trustworthy are asking us to pick - as if by default.

“Defining the default of the Next Generation in the AI-verse” - CTTT 2025 closing keynote by Chris Huntingford

That path may not be wrong in the end. But are you acting responsibly if you’re choosing merely the path of least resistance in these times? Taking the easy way out, looking away from the problems and hoping that things will work out. Yes, that would certainly conserve a lot of energy. Something that may well be in short supply when you are surrounded by other stressful factors that demand your attention.

Do keep in mind that there is a choice for us all. Even accepting the familiar defaults is a choice. It’s hard to blame anyone for sticking to them. Me, I’ll try to do what Chris urged us to in his closing slide: Vocally challenge and do not accept nonsense.

Yeah, about trusting those AI models…

After I had written all the text above, I uploaded the document to ChatGPT (GPT-4o) and asked it to check the grammar and clarity before posting the live newsletter. I got several nice-looking suggestions for sentences to be reworded. I went to look for them and… none of them existed in the document.

ChatGPT caught cheating, by making up suggestions without reading the file.

I’ve done the same routine many times, yet this was the first time I noticed it. Or should I say, I caught ChatGPT cheating on a regular task. In this age of AI, it seems even our tools have become too lazy to read the full contents of the prompt. They try to wing it by giving a plausible answer.

Which, of course, is the only thing ChatGPT ever did to begin with. LLMs will always hallucinate, because that’s how their answers are technically produced. Knowledge and thinking do not exist here; we just ask the machine to pretend that they do. That’s the reality in all the “responsible” scenarios where we use AI today.
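To make that point a bit more concrete, here’s a toy sketch of the core loop - my own illustration, not any vendor’s actual implementation. The model turns scores over possible next tokens into probabilities and samples one. Nothing in this loop ever checks the output against reality; plausibility is the whole game:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Turn raw token scores into a probability distribution (softmax)
    and sample one token from it. The model always returns *something*
    plausible-looking - there is no step that verifies it is true."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    top = max(scaled.values())
    weights = {tok: math.exp(s - top) for tok, s in scaled.items()}  # stable softmax
    threshold = random.random() * sum(weights.values())
    for tok, w in weights.items():
        threshold -= w
        if threshold <= 0:
            return tok
    return tok  # fallback for floating point edge cases

# Hypothetical scores for the word after "Your document contains...":
print(sample_next_token({"several": 2.1, "no": 0.3, "three": 1.2}))
```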

“Why didn’t you just ask M365 Copilot instead?” I tested that, too. It returned fragments of actual text that were in the document. The problem there was that the grammar mistakes it pointed out did not exist.
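The practical lesson I’m taking from this: treat proofreading suggestions from an LLM as claims to verify, not edits to apply. Here’s a minimal sketch of what that could look like - the file name and the suggestion format are hypothetical, made up for illustration:

```python
import re

def normalize(text: str) -> str:
    """Collapse whitespace and case so minor formatting
    differences don't cause false alarms."""
    return re.sub(r"\s+", " ", text).strip().lower()

def verify_suggestions(document: str, suggestions: list[dict]) -> list[dict]:
    """Keep only suggestions whose quoted 'original' text actually
    appears in the source document; report the rest as fabricated."""
    haystack = normalize(document)
    verified = []
    for s in suggestions:
        if normalize(s["original"]) in haystack:
            verified.append(s)
        else:
            print(f"FABRICATED, ignoring: {s['original'][:60]!r}")
    return verified

# Hypothetical usage with a made-up suggestion list:
with open("newsletter_draft.txt", encoding="utf-8") as f:
    draft = f.read()

suggestions = [
    {"original": "Still, trust is not forever.",
     "suggested": "Trust, however, does not last forever."},
]
print(verify_suggestions(draft, suggestions))
```

Trust by default, sure - but when the tool starts fabricating its own inputs, it’s time to check the logs.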
