Claudio Agosti's thoughts on the Hacking Team leak

I worked at HackingTeam, my emails were leaked to WikiLeaks and I’m ok with that
by Claudio Agosti

Is radical transparency the best solution to expose injustice in this technocratic world, a world that is changing faster than law can keep up with?

That question became even more relevant to me, a privacy activist, when I found myself in the WikiLeaks archive, because I worked at Hacking Team 9 years ago.

The break-in at the Italian malware-producing company has been a unique event in exposing every detail of how these types of companies operate.

What followed was an effort by the international Internet community to review what that means. Privacy activists and hackers were able to dig into the enormous amount of information released and find proof that Hacking Team was serving dictatorships.

This is a leak in the public interest, and I really feel that the personal and corporate damage is smaller than the improvement our society can gain from it. But to reach such an improvement, we have to focus on the bigger picture rather than getting distracted by the juicy details.

First, let me describe my working history and personal involvement in initiatives.

In 2006 I worked for Hacking Team. I was already a privacy activist, and my only duty at HT was consulting for private Italian companies, reviewing their network security (penetration testing). Nothing to do with RCS, malware, trojans, offensive security or the like.

Please don’t mistake me for a whistleblower: I’m not going to speak about something I worked on, because I resigned from the company long before malware became the core business model of Hacking Team.

As a digital human rights defender with my background, I realized that the discussion around Hacking Team is missing context, and I wanted to give my opinion on two topics that were raised over the last week. The first is the fundamental implications and possible abuse of power that come with the use of this technology in democratic societies. The second is the race to the bottom, where the economic value of Internet security is more important than the actual safety and security of the users and the Internet's critical infrastructure.

This is enough to clarify my personal situation about HT, so let’s switch back to my educated opinion about what’s going on.
Public forces using secret weapons

Let’s start with the scariest implications of these technologies. This week, both technical experts and others have discerned and discussed three things: evidence planters, kill switches and backdoors.

If even one of these elements is present, nobody should trust malware as a tool of the democratic state. Many claims have been made about the presence of such hidden features, but let's look in detail at what they mean.

An evidence planter can be used to plant files and remove traces on a victim's device in order to fabricate evidence to be used in court. This should be the nightmare of every state which operates within the rule of law.

A kill switch gives an agent the ability to shut down a setup made for a customer. While we do not know for sure that HackingTeam has and uses them, this technology is mentioned in a number of documents as a crisis procedure. This means that if a customer violates their license (however this is defined), HackingTeam can interrupt the service. And if we do not know whether or not this is used, the malware customers (e.g. states) certainly do not know either. Imagine the consequences of a private company having more control over software and data that the state uses than the state itself.

A backdoor is a technical modification of the software: when a certain condition matches, the software works differently. This means that Hacking Team developers can potentially benefit from this knowledge in many cases. For example, if they are being monitored by their own malware, they can disable the data collection. Considering the command and control infrastructure that they were replicating for every customer, it would be possible to disable, or to gain access to, the investigations of customers such as the DEA or Mexico's secret services.

Backdoors can be built in two ways. The simplest is matching a condition and changing the behavior. However, if someone analyses the code, they can discover and understand the condition and use it for their own purposes.
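To make the first kind concrete, here is a minimal, entirely hypothetical sketch of a condition-triggered backdoor. None of the names or logic come from any real Hacking Team code; the point is only how a single hidden condition can change behavior for whoever knows it.

```python
# Hypothetical sketch: a data-collection routine with a hidden trigger.
# Everything here (names, the "dev-" prefix) is invented for illustration.
def should_collect(device_id: str, excluded: set[str]) -> bool:
    """Decide whether the implant collects data from this device."""
    # Backdoor: devices matching a hard-coded condition, known only to
    # the developer, are silently skipped and stay invisible.
    SECRET_EXEMPT_PREFIX = "dev-"
    if device_id.startswith(SECRET_EXEMPT_PREFIX):
        return False
    # Normal behavior: collect from every device not explicitly excluded.
    return device_id not in excluded
```

Anyone auditing this code can spot the condition; that is exactly why this kind of backdoor does not survive a public code review.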

If such a backdoor exists, it will be discovered, now that the code is leaked and under review by Internet users.

The second, and stealthier, way to create a backdoor is called a "bugdoor". It is nearly invisible without a deep code audit and software testing.

The developer must be willing to weaken the code, sneaking in a vulnerability that can be used only with a deep knowledge of offensive techniques and the malware itself.

Hacking Team developers have both.

Bugdoors can be spotted too, through a deep security review and software testing procedure.
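A minimal sketch of what a bugdoor can look like, again invented for illustration and not taken from any leaked code: a token check that reads like lazy but honest code, yet authenticates any prefix of the real token, because the comparison is bounded by the attacker-controlled input length.

```python
import hmac

def check_token(provided: str, expected: str) -> bool:
    # Bugdoor: the slice limits the comparison to len(provided) characters,
    # so even a one-character prefix of the real token passes the check.
    return expected[:len(provided)] == provided

def check_token_fixed(provided: str, expected: str) -> bool:
    # What the honest version looks like: full-length, constant-time compare.
    return hmac.compare_digest(provided, expected)
```

In a casual review the first function looks like a style issue, not a vulnerability; that is what makes bugdoors deniable and hard to spot without systematic testing.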

The question we have to pose is: Do we want to live in a society where a single developer can abuse such possibilities?

My personal position on the subject

I’m in favour of banning every kind of market which supplies tools that are intended to compromise people’s data. Needless to say, that would be a complete utopia, because there is always at least one entity in the current software production chain that you cannot trust. They have their own agendas and twisted intentions that benefit them and harm others.
Stating that ‘state malware’ has to be forbidden is like being against Internet surveillance: I agree! Who would not be? But it’s unavoidable, and we have to defend ourselves preemptively.

Every state should ensure its citizens’ safety and not exploit technological weaknesses. Otherwise this behavior will, in the long term, split the population between those who are technically capable and those who are not. This is technocracy.

This ban would be ignored by the military and intelligence agencies because it would be a civil regulation, while private companies would remain as the only suppliers.

The only reasonable compromise is heavy regulation on when and where such powerful weapons can be used. Unfortunately this gets influenced by political dynamics.

Last but not least, we could create a third-party reviewer (some national authority?) that can ensure the absence of active operations (such as file writing, interface manipulation, network activity) and cryptographically sign the malware code.

This has to be a separate group of technicians who vouch for the integrity of the customer, check the technical features of the company product and ensure that no backdoors, kill switches or evidence tampering functionalities are present.
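The review-and-sign step proposed here could work roughly as follows. This is a simplified sketch of the idea, not a real protocol: a production scheme would use asymmetric signatures (e.g. Ed25519) so that anyone can verify without holding the signing key; HMAC stands in here only to keep the sketch within the standard library, and the key and build names are invented.

```python
import hashlib
import hmac

# Hypothetical key held by the reviewing authority (illustrative only).
REVIEWER_KEY = b"authority-signing-key"

def sign_build(code: bytes) -> str:
    """Reviewer attests to an audited build by signing its hash."""
    digest = hashlib.sha256(code).digest()
    return hmac.new(REVIEWER_KEY, digest, hashlib.sha256).hexdigest()

def verify_build(code: bytes, signature: str) -> bool:
    """Anyone holding the key can check a build is the reviewed one."""
    return hmac.compare_digest(sign_build(code), signature)
```

The point of the signature is that any post-review modification, such as a slipped-in backdoor or kill switch, invalidates it, so the deployed binary can be checked against what was actually audited.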

But who can do that, considering that the Italian state supported the export to Sudan, was amongst the top buyers and backed Hacking Team in investigations against former employees? It is nearly impossible now to trust a state.
The press release and narrative adopted

The second point that I need to provide context for comes from the official HackingTeam press release. The way they framed the exposure was an astonishing overestimation of their products. The effective danger of releasing malware code into the wild is lower than what HackingTeam is claiming. Headlines stating the danger of the leak have been repeated by international media and have overshadowed the most important fact: this is a weapon for transparency. Citizens, now aware, can pressure for proper regulation. The leak is not a weapon in the hands of criminals, because the only value of the weapon is secrecy.

In the underground market of digital weapons, when something secret goes public, it loses its value straight away and takes the adjective “burned”.

The secrecy of the attack techniques, like the one abusing a Flash vulnerability, is now burned.

Hacking Team has invested high-paid expertise in finding ways to obscure their malware from antivirus software. All of those investments are burned too.

Other malware producers are using a similar infection strategy, and now this is weakened too. But this is very good, because a lot of espionage attacks exploit the same kind of network injection, so this is another point of awareness for users, developers and researchers.

With all these resources burned, it is no longer true that this technology can represent a danger for society.

To be effective, this technology has to be a chain of secrets, fresh intelligence about the target, and the attackers need to be experienced. The fear-mongering press release is just propaganda which deviates public attention from the public good of this leak.
Massive leaks mix meaningful stories with a lot of personal stories

Radical transparency, the unrevised publication of everything on a subject, is essential in this phase of exponentially increasing digital power shifts, at least until we improve our laws.

Radical transparency is not holy, because personal information that has nothing to do with societal concerns is now exposed forever, and people like me just have to live with that. But people inside the system should consider this power shift and blow the whistle before the next damage happens.
So, if you are working in an ambiguous unregulated business…

Well, you have to blow the whistle, before someone else blows it for you, without any concern for your life.

At least the personal damage can be contained and the problem will be addressed. Be sure that otherwise, Radical Transparency will hit your life sooner or later, in this age of bulimic data collection.

When that happens, the last thing you will want is a reason for Internet dwellers to legitimately dig in your stuff. And if you are part of an ambiguous unregulated business, this legitimacy is quite automatic.
