Trust issues: stars, badges, and malware
How social signals in free tools are exploited by malicious actors, undermining community-driven solutions.
Tools provided by the community are an essential ingredient of a thriving low-code application platform ecosystem. Most professionals who develop Power Platform solutions surely use at least some non-commercial tools that help them be more productive than the product features offered by Microsoft alone would allow.
What this means is we are using software code provided by a third party, most often without paying anything for it. The financial barrier is therefore nonexistent, allowing popular tools to quickly gain a sizeable audience. The success of open-source projects today is a testament to how powerful the network effects of this model can be.
You know who else also enjoys a powerful distribution mechanism for their software? The bad guys. The ones who aim to make money from delivering software for free, by grabbing the users’ data, identity, and other valuables. It’s an unfortunate reality of life that the more popular something becomes, the more it attracts attention from the kind of people you’d want to stay as far away from you as possible.
Today I’ll discuss two recent, cautionary examples of how the social signals we tend to rely on may be failing to guide the community towards trustworthy tools.
Waiting for a Star to Fall
Determining the reputation of open-source tools relies on the activities of other community members. When you’re not developing commercial software, there’s not going to be a marketing budget to promote the tools. The digital word of mouth takes many forms, with the most easily measurable and observable one being the “likes”. In the case of GitHub projects, this means stars given to the repos.
The stars are not merely a vanity metric. They clearly drive the behavior of other users, like you and me, who are seeking confirmation that the tool we’re looking at has been adopted by someone else, too. This is an important signal, especially if we don’t have any prior connections with the people either developing or evangelizing the tool. It’s the same phenomenon as when we’re shopping for products on Amazon or any other online store: we tend to trust what the users say more than what the seller, maker, or provider of the product says.
It’s not surprising then that the fake reviews problem that plagues online commerce is present also on open-source software platforms. Recently a study was published that discovered over 3.1 million fake stars on GitHub projects, used to boost rankings. While the scale of such fraudulent activity is certainly newsworthy, the part that especially caught my eye was the price list for these fake-starring services.
I started to think how many stars I would expect to see on the repo of a new Power Platform tool to consider it “community approved”. Many of the credible repos from MVPs and other community activists on the Power Platform Open-Source Hub have fewer than a hundred stars. Only a fraction of the users who download and use the tools also star them on GitHub.
This makes the open-source tool ecosystem an easy target for those looking to exploit it. Imagine you’re building a repo for distributing malware and want to boost its popularity. The cost of purchasing a few hundred fake stars is so negligible that it would be stupid for these malicious actors not to use them.
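To make this concrete, here is a minimal sketch of how one might sanity-check a repo’s stargazers before trusting the raw count. The 90-day “young account” threshold and the repo names are my own placeholder assumptions, not anything prescribed by the study; it simply flags accounts that were created shortly before they starred the repo, one classic signature of purchased stars:

```python
# A minimal sketch for vetting a repo's stargazers. Assumes a GitHub
# personal access token in the GITHUB_TOKEN environment variable.
import os
from datetime import datetime, timedelta

import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    # This media type includes the starred_at timestamp in the response.
    "Accept": "application/vnd.github.star+json",
}


def suspicious_stargazers(owner: str, repo: str, max_pages: int = 3) -> list[str]:
    """Flag stargazers whose accounts were created shortly before starring."""
    flagged = []
    for page in range(1, max_pages + 1):
        resp = requests.get(
            f"{API}/repos/{owner}/{repo}/stargazers",
            headers=HEADERS,
            params={"per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        for entry in resp.json():
            starred_at = datetime.fromisoformat(entry["starred_at"].replace("Z", "+00:00"))
            user = requests.get(
                f"{API}/users/{entry['user']['login']}", headers=HEADERS, timeout=30
            ).json()
            created_at = datetime.fromisoformat(user["created_at"].replace("Z", "+00:00"))
            # The 90-day window is an arbitrary heuristic, not a hard rule.
            if starred_at - created_at < timedelta(days=90):
                flagged.append(entry["user"]["login"])
    return flagged


if __name__ == "__main__":
    # Placeholder repo; substitute the tool you are evaluating.
    print(suspicious_stargazers("some-owner", "some-repo"))
```

Of course, a determined star merchant can age their bot accounts, so this is a spot check, not a verdict. It’s also rate-limited by the GitHub API, which is exactly why most of us never bother and just glance at the number instead.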
The full study is available here: 4.5 Million (Suspected) Fake Stars in GitHub: A Growing Spiral of Popularity Contests, Scams, and Malware. It begins with a quote of Campbell’s law that serves as an important reminder of the underlying social science principles at play, written 30 years before GitHub was even founded:
“The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
Systems that have been built to crowdsource the evaluation of products or content can initially offer an efficient way for ranking, moderation, and other key services a digital platform needs. In the long run, the potential reward for gaming such systems through automated activity grows higher and thus the reliability of “likes” is reduced. We can only imagine what the massive push for automated AI agents is going to do to such platforms soon…
Should we then gravitate towards a centrally governed model where an authoritative party gives out these stars? While it could reduce the attack surface for automated campaigns, there are other kinds of problems with software endorsements provided by the platform operator.
Promoted malware in your browser
In addition to GitHub, another delivery channel for free software is to package tools as browser extensions. From a usage perspective, this is a very convenient model. From a security perspective, I don’t think people in general realize all the risks involved.
With a couple of clicks in your browser provider’s extension store, you can install software and authorize it to run on far more than just one machine: browsers like Chrome will automatically sync extensions to any PC where you log in with the same identity in the future. And just like in other modern app stores, updates get pushed to users automatically.
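Few people ever review what has accumulated in their browser profile over the years. As a rough sketch, here’s one way to enumerate the extensions installed on a machine and the permissions they request, by reading the manifests Chrome keeps on disk. The profile path below is the Linux default and is an assumption on my part; macOS and Windows keep the profile elsewhere:

```python
# A rough audit of locally installed Chrome extensions and their permissions.
# The profile path is the Linux default (an assumption); adjust per OS.
import json
from pathlib import Path

EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"


def list_extensions() -> None:
    for ext_id in sorted(EXT_DIR.iterdir()):
        if not ext_id.is_dir():
            continue
        # Each extension ID folder contains one subfolder per installed version.
        for version_dir in ext_id.iterdir():
            manifest_path = version_dir / "manifest.json"
            if not manifest_path.exists():
                continue
            manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
            # Localized extensions show an __MSG_*__ placeholder instead of a name.
            name = manifest.get("name", "<unknown>")
            perms = manifest.get("permissions", []) + manifest.get("host_permissions", [])
            print(f"{ext_id.name} {name} ({version_dir.name})")
            print(f"  permissions: {perms}")


if __name__ == "__main__":
    list_extensions()
```

An extension that can read and change data on all websites is worth a second look, whatever its star rating says.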
Naturally, such a great channel is a lucrative playground for cybercrime. Even if attackers can “only” gain access to what is done in the browser, that probably covers 99% of the user activities with a financial opportunity to tap into. Previously I wrote about the Karma connection malware campaign that targeted browser extensions, including the popular Dynamics 365 Power Pane Chrome extension with 70k active users:
A much wider campaign became public during the Xmas break. On Christmas Eve, an employee of the security vendor Cyberhaven clicked a link in a phishing email. What looked like an email from Google, sent to Chrome extension developers, turned out to be a trap designed to give attackers access rights to the code behind the extension. Soon after, new malicious code was pushed to the 400,000 Cyberhaven users.
This campaign had many other victims, too. At least 36 extensions have been identified as containing the same malicious code, putting the total user base at 2.6 million by now. Apparently, this particular attacker was interested in hijacking Facebook business accounts, using a script that looks for QR codes related to MFA and CAPTCHA mechanisms. If you ever wonder how people lose access to their FB identities despite multi-factor authentication, this is one way it’s done.
I looked at the list of infected extensions and browsed a few examples that remained online in the Chrome Web Store. One of them, the YesCaptcha assistant, still had 200k users at the time of writing. More importantly, it was listed as a Featured extension by Google. Even though the average rating wasn’t all that high, the social indicators here could certainly lead a normal Chrome user to believe that installing this extension into their browser profile was not a risky decision.
Okay, what exactly is needed to claim the Featured badge? We users don’t normally bother clicking into boring docs about these things. When I did, it became obvious that everything there is written for extension developers:
“Featured extensions follow our technical best practices and meet a high standard of user experience and design. Before it receives a Featured badge, the Chrome Web Store team must review each extension. The team checks for adherence to CWS best practices, an intuitive user experience, and use of the latest platform APIs, among other things.”
The best practices list looks nice. It talks about reassuring-sounding topics like compliance, security, and privacy. The documentation says that the Chrome team members manually evaluate each extension before it receives the Featured badge. The intentions behind this initiative surely are good.
Let’s pause here for a moment, though. As with the GitHub stars, wouldn’t the developers of malicious browser extensions have a big incentive to follow the process and achieve signals like the Featured badge for their software? Even though you can’t simply purchase it the way you can the stars, doesn’t it make perfect sense to first follow the rules, earn the one-off evaluation badge, and then update the code to activate the sinister part of the master plan?
The problem with validating the ever-growing pool of published code in the software stores is not unique to Google. For instance, Microsoft operates the Visual Studio Code Marketplace that is increasingly becoming a hosting ground for malicious VS Code extensions targeting software developers.
If big corporations like Google or MS are not able to keep up with updates to these extensions, what hope do we normal users have?
The inconvenient truth about trusting code
It’s easy to blame the victims and say, “you shouldn’t be running just any random code found on the internet”. Sure, everyone should take a moment to think before they click links in surprising emails or install apps and extensions. The problem is, how could even knowledgeable users draw the line between what is and isn’t suspicious these days?
So many tools that we use are “free to use” - in one way or another. The spectrum of legit tools ranges from truly free open-source projects to software products/services with some indirect monetization model. From harmless ads to questionable data collection to downright illegal activities, at some point on that spectrum we’re expected to realize we shouldn’t proceed any further. In reality, even legitimate evergreen software keeps changing without its existing users knowing what’s happening.
You are not safe from malware, even when you are paying for the software in use. The Cyberhaven incident is a great example of the risks since that particular software product was built and operated by a cybersecurity company. This highlights the grim logic behind such campaigns: the more trustworthy the vendor is in the eyes of users, the higher the potential value is for compromising their software.
In the end, we can’t remove the element of trust from the equation and replace it with some policy or technology. There was an excellent, in-depth article published about this dilemma a few days ago on the Educated Guesswork blog:
The full article is well worth a read, but it all really boils down to this:
“Not only is there no meaningful way to determine what software is running on a given device without trusting the device, even when you download the software yourself, verifying that it's not malicious is extraordinarily difficult in practice and mostly you just end up trusting the vendor anyway.”
There is a multitude of technical ways in which individual steps in the software supply chain could, in theory, be made more secure. In practice, even if there were resources to scale such checks to cover the typical software portfolio of a user today, it would involve trusting a small army of individuals to each perform their tasks flawlessly. In the real world, we mostly just have to rely on a network of trusted parties to collectively provide us with sufficient coverage from cybersecurity threats.
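As one small illustration of the point, here is a sketch of the simplest such supply chain check: verifying a downloaded file against a published SHA-256 hash. The file name and hash below are placeholders. Tellingly, the check only moves the trust around - you still have to trust whoever published the expected hash:

```python
# Verifying the SHA-256 checksum of a downloaded tool against a hash
# published by its author. Note how this only shifts the trust: you
# still have to trust the channel that published the expected hash.
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_download(path: Path, expected_hash: str) -> bool:
    actual = sha256_of(path)
    if actual != expected_hash.lower():
        print(f"MISMATCH: expected {expected_hash}, got {actual}")
        return False
    print("Checksum OK")
    return True


if __name__ == "__main__":
    # Placeholder values; in practice the hash comes from the vendor's
    # release notes or website, which you must trust to be authentic.
    verify_download(
        Path("tool-installer.zip"),
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    )
```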
Know your enemies
As some of you may know, I don’t have a software developer background. Without the ability to write traditional programming code, I could hardly spot vulnerabilities or malicious elements in the source code of any tool. So, am I useless when it comes to protecting businesses from information security risks?
I refuse to think so. Having worked professionally in the Microsoft business applications field for two decades, I know there are many layers to software security that are not about the underlying code. So much of our struggle revolves around the choices in how to use the available technology - not in manufacturing more of it. Guiding people towards better, more educated choices about what to use where, and why, is the layer where I feel I can have the biggest positive impact. I believe it all starts with spreading awareness.
It must have been 2023 when I first became consciously aware of OWASP myself. Sure, I had run into the acronym earlier, but it didn’t have much relevance to what I was doing. It wasn’t until the emergence of the OWASP Low-Code/No-Code Top 10 project that I realized there was something in it for me. These security risks were directly related to my area of expertise, meaning low-code technologies such as Microsoft Power Platform. They resonated with me because so few people around me were talking about them. There was obviously work to be done.
Next week, I’ll be making my first public contribution to the OWASP community by presenting at the LC/NC security meetup. My talk “Power Tools & Power Malware” will focus on the topics discussed in this post. Ziv Daniel Hagbi from Zenity will present his findings on the gaps in using Power Platform security groups to secure your environments.