Zoom was the center of attention earlier this year. Partly, this was because millions of people suddenly needed to work from home; partly, it was because of the various vulnerabilities that hackers (some wearing white hats, others black hats) uncovered. Of course, the latter cause is not entirely independent of the former: people tend to shoot at things flying high.
Zoom appeared to be a perfect target. For years, their top priority was user experience, and this attracted many users. Focusing solely on functionality might have some immediate benefits; however, neglecting everything else is always a mistake in the long run. A system is not ready "simply when it works."
Zoom's story is proof that ignoring security during development is a debt one will undoubtedly have to pay off sooner or later, and with a tremendous interest rate. Zoom's developers made so many mistakes (some basic, some complex) that finding a misstep was easy. Once a service gains attention, such problems surface on a mass scale, jeopardizing any previous success.
So, let's look at the notable mistakes and see what we can learn from them from a software security standpoint.
The first piece of user-friendly functionality (with negative security implications) was related to hyperlinks. Zoom turned all URLs into clickable links, which alone can send an inattentive user to a malicious website. But Zoom also turned UNC paths into hyperlinks (those are the famous "addresses" starting with Windows's two backslashes that identify network resources). Windows uses the SMB protocol to open these; when such a link is clicked, Windows sends the logged-in user's username and password hash to the request's target. The password itself is never sent in the clear, but somebody with the right tools and skills could still crack the hash in a reasonable amount of time.
By turning these addresses into clickable links, Zoom introduced an even bigger problem: a remote code execution possibility. On Windows, a UNC path can also point to an executable. To be fair, when such a link is clicked, Windows prompts the user to confirm they really want to execute the target file. However, this protection only applies when the file resides on the network (intranet); a local file (e.g. \\?\C:\Windows\System32\notepad.exe) is opened completely silently.
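The obvious mitigation is to never auto-linkify anything that looks like a UNC path in the first place. Below is a minimal sketch of such a filter; `linkify_token` and the patterns are hypothetical names, not Zoom's actual code, and a production version would also HTML-escape the URL before embedding it.

```python
import re

# Anything starting with two backslashes is a UNC path (\\server\share or
# the \\?\C:\... local-device form) and must stay plain text.
UNC_PATTERN = re.compile(r'^\\\\')
URL_PATTERN = re.compile(r'^https?://', re.IGNORECASE)

def linkify_token(token: str) -> str:
    """Return an HTML anchor only for plain http(s) URLs; leave UNC paths inert."""
    if UNC_PATTERN.match(token):
        return token  # never hyperlink \\host\share or \\?\C:\...
    if URL_PATTERN.match(token):
        # Real code would escape `token` here to prevent HTML injection.
        return f'<a href="{token}">{token}</a>'
    return token
```

An allow-list (only `http`/`https` schemes become links) is safer than trying to enumerate every dangerous scheme and path form.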
Both pose a medium risk, as the user needs to click a link to trigger the attack. Considering users' general lack of security awareness, this is not at all unlikely, and the consequences may be severe.
Cryptography is a feature with direct security implications. In this domain, we have both misleading communication from Zoom and some serious mistakes made by their developers.
Zoom's documentation claimed it applied end-to-end encryption. That, however, turned out not to be entirely accurate. Zoom generated a secret key that meeting participants' client software used to encrypt and decrypt the video and audio stream. However, since Zoom managed both the stream and this key, it could decrypt the stream of any meeting at any time; exactly what users who believe end-to-end encryption is in place expect to be impossible.
Yet again, the situation was even worse.
Without going into the details of block ciphers and their modes of operation: Zoom implemented its own encryption protocol on top of the Real-time Transport Protocol (RTP). This is a serious mistake on its own; one should never roll their own cryptography! In doing so, they used the Electronic Codebook (ECB) mode of operation, which is vulnerable in various ways and has been considered outdated for quite some time. The Secure RTP (SRTP) standard prescribes up-to-date modes of operation, such as AES in Counter (CTR) mode or f8 mode (a variant of OFB). But Zoom did not use SRTP.
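ECB's core weakness is easy to demonstrate: identical plaintext blocks produce identical ciphertext blocks, so patterns in the plaintext leak straight into the ciphertext. The sketch below uses a hash-based stand-in for a block cipher (NOT real cryptography, just deterministic per key and block, which is all the demonstration needs) and contrasts ECB with a CTR-style construction.

```python
import hashlib

BLOCK = 16

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: deterministic per (key, block).
    # Do not use this for anything except illustrating mode behavior.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def encrypt_ecb(key: bytes, plaintext: bytes) -> bytes:
    # ECB: each block is encrypted independently, so identical plaintext
    # blocks yield identical ciphertext blocks, leaking structure.
    return b''.join(toy_block_encrypt(key, plaintext[i:i + BLOCK])
                    for i in range(0, len(plaintext), BLOCK))

def encrypt_ctr(key: bytes, plaintext: bytes) -> bytes:
    # CTR: encrypt a counter and XOR it into the plaintext. Each block gets
    # a different keystream, so repeats are hidden. (A real CTR mode also
    # mixes in a per-message nonce; omitted here for brevity.)
    out = bytearray()
    for i in range(0, len(plaintext), BLOCK):
        keystream = toy_block_encrypt(key, i.to_bytes(BLOCK, 'big'))
        out += bytes(p ^ k for p, k in zip(plaintext[i:i + BLOCK], keystream))
    return bytes(out)

msg = b'ATTACK AT DAWN!!' * 2          # two identical 16-byte blocks
ecb = encrypt_ecb(b'key', msg)
ctr = encrypt_ctr(b'key', msg)
print(ecb[:BLOCK] == ecb[BLOCK:])      # True: ECB leaks the repetition
print(ctr[:BLOCK] == ctr[BLOCK:])      # False: CTR hides it
```

With highly structured data like video frames, this kind of leakage lets an attacker recover visible outlines from ECB ciphertext without ever breaking the cipher itself.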
Since the communication was also compressed, mounting the cryptographic attacks was far from trivial. But with enough resources they were by no means impossible, and that is the point.
Finally, there was yet another trivial logical error in the protocol with serious consequences. For efficiency reasons (remember, user experience is the focus!), Zoom shared the above-mentioned secret key even with users still in the waiting room. This meant that unauthorized people could access the audio and video stream of an ongoing meeting without ever being admitted to it.
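The fix for this class of bug is structural: the key must only ever be handed out after the authorization decision, never before. A minimal sketch (hypothetical class and method names, not Zoom's actual protocol):

```python
# Sketch: the meeting key is released only to participants the host has
# explicitly admitted; clients in the waiting room never receive it.
class Meeting:
    def __init__(self, meeting_key: bytes):
        self._key = meeting_key
        self._admitted = set()

    def admit(self, user_id: str) -> None:
        # Host action: move a user from the waiting room into the meeting.
        self._admitted.add(user_id)

    def get_stream_key(self, user_id: str) -> bytes:
        # No admission record means no key: authorization gates key release.
        if user_id not in self._admitted:
            raise PermissionError('user has not been admitted to the meeting')
        return self._key
```

Distributing the key early for "efficiency" silently turns the waiting room from a security control into pure UI decoration.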
Let's take a look at a weakness that has even been named after Zoom. In an analogy to photobombing, where somebody appears uninvited in a picture and essentially "hijacks" it, one can also appear uninvited in a meeting. Without controls on who can join (meeting passwords or a waiting room), anyone can enter a meeting once they learn the ID of an ongoing session, whether just for trolling or for a more specific purpose. This is called Zoombombing.
The simplest way to learn meeting IDs is through a search engine. If a meeting ID has been made public and indexed by Google, one can find it using a few advanced search operators. Zoom links follow a specific pattern (they contain "zoom.us"), so it is quite easy to come up with the right search term. Zoom also displayed the meeting ID in the window title, meaning that if someone posted a screenshot of an ongoing (and not otherwise protected) meeting, anyone could join.
But once again, we can go further.
The meeting ID could also be guessed even if it was never leaked. It is an 11-digit (or even shorter) number, which does not provide enough secrecy against systematically trying all possible values with a script (i.e., brute forcing). Such a script can iterate over candidate IDs and distinguish valid ones from invalid ones simply by observing the responses.
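A back-of-the-envelope calculation shows why a numeric ID is so weak here. The attacker does not need one specific meeting, only *some* valid ID, so the expected number of guesses is the keyspace divided by the number of concurrently active meetings. The figures below are assumptions for illustration, not measured values:

```python
# Assumed numbers, for illustration only.
KEYSPACE = 10 ** 11          # 11-digit meeting IDs
ACTIVE = 1_000_000           # assumed concurrent meetings at peak
RATE = 100                   # assumed requests/second for one scanning client

# Expected guesses until the first hit on *some* ongoing meeting.
expected_guesses = KEYSPACE / ACTIVE
print(f'expected guesses: {expected_guesses:.0f}')
print(f'time at {RATE} req/s: {expected_guesses / RATE / 60:.0f} minutes')
```

Under these assumptions a single client finds a live meeting in well under an hour, and parallel clients scale the attack linearly. Random-looking numeric IDs are locators, not secrets; actual secrecy has to come from a password or an admission step.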
Even though Zoom started to apply an account lock-out (rejecting requests from an endpoint that tried to connect with too many invalid IDs), researchers have shown that by routing requests through the Tor network, one can easily evade this protection.
There are various best practices we could discuss here, including fixing injection problems, validating input, designing the UI with security in mind, and preventing information leakage. But let's summarize the high-level principles which, if obeyed, would have prevented all these vulnerabilities.
One of the key challenges of IT systems lies at the core of the Zoom incidents: the tension between user experience and security. The two usually do not get along well; when you improve something in one domain, you typically lose something in the other. It is not easy to find the right balance, and Zoom initially failed at finding it.
Another high-level principle related to cryptography for developers is rather simple: "Don't do this at home." Just rely on well-established solutions and implementations of known quality. It is not only more efficient, but also more secure.
Related to Zoombombing, we can refer to the principle of fail-safe defaults. From a security and authorization perspective, a system's default behavior should always be to deny: you need an explicit, good reason to allow something; otherwise, you should reject it.
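Applied to meeting access, fail-safe defaults means the join check returns "deny" unless some explicit rule allows the request; no case ever falls through to "allow". A minimal sketch with a hypothetical `may_join` helper:

```python
# Fail-safe defaults: every path that is not explicitly allowed is denied.
def may_join(meeting: dict, user: str, supplied_password=None) -> bool:
    if meeting.get('locked'):
        return False                                  # explicit deny wins
    if meeting.get('password') is not None:
        return supplied_password == meeting['password']
    if meeting.get('waiting_room'):
        return user in meeting.get('admitted', set())
    return False   # default: deny; an open meeting must be an explicit opt-in
```

The final `return False` is the principle in one line: an unconfigured or unforeseen state rejects the request instead of quietly letting everyone in.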
Since these issues were found and made public, Zoom has invested heavily in fixing them. The app no longer turns URLs or UNC paths into hyperlinks, end-to-end encryption and the SRTP protocol are now taken seriously, and meeting IDs are handled more securely; for example, they are no longer shown in the window title.
Whether all this will be enough to avoid long-term reputation damage is still a question. Nevertheless, Zoom's story teaches us an important lesson: if you think that you can develop code without secure coding literacy, you are mistaken. You cannot get away with it. Sooner or later, the vulnerabilities will catch up with you.
Written by Ernesto Jeges. Ernesto is a seasoned security professional with ten years' experience educating software professionals on secure coding practices in C/C++, Java, C#, Python, and many other languages and platforms. As Cydrill's founder and lead instructor, Ernesto helps programmers master secure coding best practices through engaging and highly technical live security training.