Like almost all human endeavors, open-source software development involves
a range of power dynamics. Companies, developers, and users are all
concerned with the power to influence the direction of the software — and,
often, to profit from it. At the 2025 Open
Source Summit Europe, Dawn Foster talked about how those dynamics can
play out, with an eye toward a couple of tactics — rug pulls and forks — that
are available to try to shift power in one direction or another.
The Elephant in The Biz: outsourcing of critical IT and cybersecurity functions risks UK economic security
Recently, there have been three major ransomware and/or extortion incidents at three big UK companies — Co-op Group, Marks and Spencer and Jaguar Land Rover. One thing connects them all: in the past 5 years, they all outsourced key IT and cybersecurity services to TCS, aka Tata Consultancy Services. I’m not saying TCS are bad, or totally at fault. But I want to unpack what is happening here, as the wider context is important.
Estimates vary as to the cost of these incidents but the Cyber Monitoring Centre pegs the cost at Co-op and M&S at around half a billion pounds — and retail industry groups also land around that figure.
Marks and Spencer are still recovering systems several months later, and Co-op Group spent over a month without key IT systems.
With Marks and Spencer, their insurance provider suffered a “full tower loss”, meaning the cost went over M&S’s £100m of cover. M&S expect the cyber insurance policy to cover around half the total cost. Co-op Group had no cyber insurance cover and so refused to pay the ransom, which is why they attracted the most escalation in the media from the teenage hackers involved.
Jaguar Land Rover are currently 15 days into a total car manufacturing shutdown. As I write this, over two weeks in, staff still have no idea when IT systems will be restored so that car manufacturing can restart.
Costs so far to Jaguar Land Rover are currently unknown — the BBC reports estimates of around £10m a day, so somewhere in the region of £150m so far. However, this is ‘just’ lost profit — when you factor in cyber incident response, legal fees and everything else — plus the fact the incident is still not resolved and services not recovered — it’s very possible this will rise significantly.
The Telegraph claim JLR are losing £72m a day, which would bring the current total to just over a billion pounds if accurate.
The result? These three incidents alone likely cost the orgs involved, all told, around a billion UK pounds. The only suspects arrested — mostly teenagers — have been released on bail and, months later, still haven’t been charged. Some of the suspects have prior convictions in the UK for similar incidents… but simply kept on keeping on.
Noman wishes you keep on keeping on
But here’s the thing. A billion quid. Sounds bad… but they’re private companies, so who cares?
The BBC reports Jaguar Land Rover made just over £2 billion in profit in the past year. They can afford to take a hit too. They’ve saved a lot of money by outsourcing to TCS, after all.
The following might make you care.
The BBC also reports that the downstream impact on Jaguar Land Rover’s suppliers — many small to medium sized businesses — is leading to staff being laid off. There are now growing calls for the UK government to set up a furlough scheme, at taxpayer expense, to pay the suppliers to keep staff on, while Jaguar Land Rover try to recover their largely outsourced IT systems.
Essentially, we’ve ended up in a situation where, to deliver shareholder value, large organisations are incentivised to outsource core IT and cybersecurity functions to low cost managed service providers abroad — and then when hit with ransomware, the insurance will cover paying the ransom (some insurers will actually push for payment to criminal groups, to cover their potential losses).
This cycle plays into the ransomware economy, where the same criminal groups can then reinvest the money into purchasing exploits and gaining initial access to other organisations. Because ransomware is such big business, many of the groups have far bigger research and development funds than the organisations they’re attacking. Especially when the organisations they’re attacking have outsourced key areas to low cost providers.
The net effect is ransomware and extortion groups continue to gain access to more organisations, and risk UK economic security. It is only a matter of time before they hit some kind of essential UK service that directly impacts millions of people — by which point millions of people will be asking what is being done about the problem. And the answer is: not enough. When we’re at the stage of having to look at urgent furlough schemes for JLR’s suppliers to rightly save jobs, it isn’t so much a sign that the canary in the coal mine has died as that the coal mine is about to collapse on people.
How we got here
Co-op Group began its relationship with TCS over a decade ago, but really started to outsource key IT services to TCS around 2017. At the time I managed their Security Operations Centre. They outsourced their IT helpdesk — thought to be the intrusion point for the incident — to TCS, transferring staff to TCS and ultimately making roles redundant.
At the time, I took this photo in the public lobby of 1 Angel Square, where a colleague had written that they were working on selling the company to Tata (TCS) as part of “Fuel for Growth”:
Colleagues were not happy
After I left the organisation in late 2019, they later fully outsourced my team, the Cyber Security Operations Centre, to TCS, along with various other key cybersecurity services. That team is tasked with detecting unauthorised access. They also centralised more IT teams, and then transferred those services to TCS too around 2020, making colleagues redundant in the process:
This resulted in redundancies, including in their IT helpdesk — also the point of entry for the incident. My understanding is that, as this relationship progressed, they also started outsourcing elements of their cybersecurity function to TCS — including the team tasked with detecting unauthorised activity.
M&S recorded pretax profits of £876m in the past financial year.
Jaguar Land Rover follows a similar pattern. They outsourced key areas of IT to TCS. Then went on to outsource bits of cyber, including Security Operations, Governance Risk and Compliance, and Identity and Access Management to TCS. Although staff were transferred using TUPE, many were later made redundant. That TUPE pattern is repeated across the orgs.
JLR recorded pre-tax profits of £2.5 billion in the past financial year, their best performance in a decade.
TCS deny everything
They don’t, actually. TCS deny that any of their systems were breached. Their statements on the matter should be parsed carefully to see exactly what claim they are making or answering.
It is well known in the cyber industry that the LAPSUS$ kids were phoning helpdesks and asking for access, and getting it with ease. TCS provided this helpdesk service, shared across customers. When TCS have domain admin in customer environments and manage IT services, the question isn’t ‘were TCS breached?’ It’s ‘how were TCS’s customers breached, and did TCS provide the services involved?’
It’s not a secret in the cyber industry that there are a lot of stories about TCS — I’ve heard names like Terrible Cyber Service in the trenches. And the memes have been around for a while.
100000000000% certified
There’s also, you know, all the Reddit threads over the years, e.g.:
No. Managed Service Providers aren’t bad. For small businesses in particular, a great MSP can elevate an organisation, giving it technology it wouldn’t otherwise be able to deploy and manage properly at its own scale.
However — when you’re talking about organisations with tens of thousands of employees, when they outsource areas like cyber risk and compliance, cyber security operations, password reset helpdesks etc — they take on a level of risk which, I think, becomes highly questionable. It’s not just risk — it’s risks that can and do materialise. That 10% budget saving doesn’t look so hot when the whole company has a heart attack.
MSPs rely on commonality to scale. They use, for example, teams of people who cover vast numbers of customers. They run IT helpdesks where, based on the phone number you call, you get a customised one in that company’s name — e.g. TCS run a Microsoft frontline employee IT service desk. But the person answering the phone is spinning many plates and just sees the number you called, pulls up that company’s process, and runs through a script with you. It’s easy to abuse, and easy for the operator to make a human error.
MSPs use Standard Operating Procedures. They’ll be managing Active Directory, storage arrays, VMware clusters etc across thousands of other orgs. They write everything down. Everything is documented. If you’re an attacker, it’s easy to abuse. These things are the beating heart of a company.
It’s also the case that many MSPs pay incredibly poorly, and there are examples of staff at MSPs accepting bribes. Given the level of access they have — for example being able to reset MFA tokens for administrative users — paying incredibly low wages is not only risky, it’s really dumb.
Incentives are broken
Capitalism encourages cost reduction. CIOs want to, or in some cases have to, cut 10% off their budget each year. But when you get to the point where the UK government may have to use taxpayer money to pay JLR’s suppliers to not work, while JLR book record profits, we ought to ask ourselves — do the incentives here create economic risk to the UK?
With losses approaching a billion quid, you’d think insurance providers would be devastated and on high alert. No. Insurance providers are very excited by the incidents, and are currently out in full force profiting from them:
Cyber incident response providers are equally loving it — stick any of these breaches into Google, or ransomware in general, and it is boom times. A large part of the cyber industry bottom line is, sadly, ransomware — which is why there continues to be a lobbying pushback around banning ransom payments.
Who isn’t loving ransomware? The victim orgs, the school children who see their schools close in incidents so regular they don’t make the news, people who can’t use council services for months on end in ransomware incidents which barely make the news… the list is long.
We’ve normalised ransomware.
The list will get longer as ransomware and extortion groups move on to things like airlines, food production, warehousing and other sectors. You might think — Kevin — they already do this. They’ve barely started. They have a target rich environment. There is not a shortage of victims.
Because they know large orgs have outsourced helpdesks to super low cost providers, the threat increases. Because they know orgs have outsourced key IT systems to providers who have 3940 other customers and they’re managing from flow charts and SOP documents, the risk increases.
Because organisations are busy trying to automate everything and put IT at the heart of everything to reduce cost, the risk and the threat increases.
When you combine cost pressures, capitalism, automation and a digital economy — there’s risks which have developed here. Many orgs are, essentially, in a race to the bottom when it comes to cost. Races to the bottom don’t end well.
Data protection
Ciaran Martin wrote a really good LinkedIn post which got me thinking:
So why are we still banging on about personal data in cases like this as if it’s the primary concern? It’s important. But car manufacturers don’t hold much very interesting data about their customers. The *primary* issue here is the disruption, not data loss.
Part of the problem is that right now we have comprehensive legal obligations to protect data but we don’t have comprehensive legal obligations to protect services. Even with the pending new legislation in the UK, it’s only the critically important companies that will be covered.
My personal view is that we need to take a long hard look at this (im)balance. Both data security and service continuity are important. But they’re quite different — it’s the organisational equivalent of suffering someone sneaking around your house copying your sensitive information, or having someone punch you in the face and break your legs. Both are unpleasant and damaging, but they’re very different experiences with very different impacts.
And yet law and practice tells us to worry about the former more than the latter. Isn’t that a bit weird?
He’s right. I hadn’t thought about it before. For example, the press has barely mentioned the Jaguar Land Rover incident after the first two days — save for when they admitted “some data” may be impacted. That became another news cycle. But… why? The primary impact here is that the UK government may have to effectively bail out the motor sector. Not that some data may have been taken.
Companies are hyper focused on legislation — rightly so, and GDPR is proof that legislation works. However, while the focus on data protection is highly visible at most large organisations, the focus on cyber resilience is — frankly — almost non-existent.
Many organisations think IT disaster recovery plans deal with ransomware. They don’t. The first thing ransomware groups do is delete backups and recovery systems, before they disrupt anything else. I’ve talked to business after business after business whose real plan with ransomware is simply: the insurance covers it, we’d pay. Anybody who has been in the trenches of these incidents will tell you that two things happen: your business IT has a heart attack, and paying does not equal restoration. In almost every case, even with payment, restoration takes weeks to months. The real risk — which often materialises — is somebody deliberately trying to set your head office on fire, but via IT. And in almost all cases, when that happens, the organisation doesn’t know what to do — and calls the NCSC and NCA like they’re the fire department. They are not the fire department.
If you look at Marks and Spencer’s website, they have a 3 page list of executives and C-levels who control every important element of the business — but there is nobody listed for cybersecurity. That role exists… but it isn’t even seen as important enough to name on the website. The same with Jaguar Land Rover and Co-op Group.
What I think the UK government should do
There’s a couple of pillars I think the UK can lead on:
Bring forward the legislation around forcing companies to disclose if they’ve paid a ransom, and banning critical infrastructure from paying ransoms.
Ask for plans to be prepared to ban payments of all cyber ransoms by or for UK companies. This does not mean it has to be implemented. This means there should be planning in place around how to do it, should we need to pull this lever. It’s also a signal of intent — including to boards that ‘just pay’ is a bad plan.
There needs to be education for very large organisations around the level of risk they take with third party service providers of absolutely critical services — some of these services should be in house, and properly managed, and ringfenced as cost of doing business.
There needs to be follow-on exploration of legislation on cyber resilience around protecting key services. “BEING SOLD TO TATA”, as seen on the board above, is probably not just being written at the Co-op. It’s just that nobody outside realises it is happening.
There needs to be a plan to defuse the ransomware economy, even if that means pushing back against the cyber vendor industry. Incentives must be realigned.
I really do believe the UK can lead the way on this whole topic, and civil society would be better for it. I also believe we not only can, we must — the choice is whether we react after things have gone very wrong, or start acting now.
I’ve got my hands on an internet-connected camera and decided to take a closer look, having already read about security issues with similar cameras. What I found far exceeded my expectations: fake access controls, bogus protocol encryption, completely unprotected cloud uploads and firmware riddled with security flaws. One could even say that these cameras are Murphy’s Law turned solid: everything that could be done wrong has been done wrong here. While there is considerable prior research on these and similar cameras that outlines some of the flaws, I felt that the combination of severe flaws is reason enough to publish an article of my own.
My findings should apply to any camera that can be managed via the LookCam app. This includes cameras meant to be used with less popular apps of the same developer: tcam, CloudWayCam, VDP, AIBoxcam, IP System. Note that the LookCamPro app, while visually very similar, is technically quite different. It also uses the PPPP protocol for low-level communication but otherwise doesn’t seem to be related, and the corresponding devices are unlikely to suffer from the same flaws.
There seems to be little chance that things will improve with these cameras. I have no way of contacting either the hardware vendors or the developers behind the LookCam app. In fact, it looks like masking their identity was done on purpose here. But even if I could contact them, the cameras lack an update mechanism for their firmware. So fixing the devices already sold is impossible.
I have no way of knowing how many of these cameras exist. The LookCam app is currently listed with almost 1.5 million downloads on Google Play however. An iPhone and a Windows version of the app are also available but no public statistics exist here.
The highlights
The camera cannot be easily isolated from unauthorized access. It can function as a WiFi access point, but setting a WiFi password isn’t possible. Or it can connect to an existing network, but then it will insist on being connected to the internet. If internet access is removed the camera will go into a reboot loop. So you have the choice of letting anybody in the vicinity access this camera or allowing it to be accessed from the internet.
The communication of this camera is largely unencrypted. The underlying PPPP protocol supports “encryption” which is better described as obfuscation, but the LookCam app almost never makes use of it. Not that it would be of much help, the proprietary encryption algorithms being developed without any understanding of cryptography. These rely on static encryption keys which are trivially extracted from the app but should be easy enough to deduce even from merely observing some traffic.
The camera firmware is riddled with buffer overflow issues which should be trivial to turn into arbitrary code execution. Protection mechanisms like DEP or ASLR might have been a hurdle but these are disabled. And while the app allows you to set an access password, the firmware doesn’t really enforce it. So access without knowing the password can be accomplished simply by modifying the app to skip the password checks.
The only thing preventing complete compromise of any camera is the “secret” device ID which has to be known in order to establish a connection. And by “secret” I mean that device IDs can generally be enumerated but they are “secured” with a five-letter verification code. Unlike with some similar cameras, the algorithm used to generate the verification code isn’t public knowledge yet. So somebody wishing to compromise as many cameras as possible would need to either guess the algorithm or guess the verification codes by trying out all possible combinations. I suspect that both approaches are viable.
And while the devices themselves have access passwords which a future firmware version could in theory start verifying, the corresponding cloud service has no authentication beyond knowledge of the device ID. So any recordings uploaded to the cloud are accessible even if the device itself isn’t. Even if the camera owner hasn’t paid for the cloud service, anyone could book it for them if they know the device ID. The cloud configuration is managed by the server, so making the camera upload its recordings doesn’t require device access.
The hardware
Most cameras connecting to the LookCam app are being marketed as “spy cam” or “nanny cam.” These are made to look like radio clocks, USB chargers, bulb sockets, smoke detectors, even wall outlets. Most of the time their pretended functionality really works. In addition they have an almost invisible pinhole camera that can create remarkably good recordings. I’ve seen prices ranging from US$40 to hundreds of dollars.
The marketing spin says that these cameras are meant to detect when your house is being robbed. Or maybe they allow you to observe your baby while it is in the next room. Of course, in reality people are far more inventive in their use of tiny cameras. Students discovered them for cheating in exams. Gamblers use them to get an advantage at card games. And then there is of course the matter of non-consensual video recordings. So next time you stay somewhere where you don’t quite trust the host you might want to search for “LookCam” on YouTube, just to get an idea of how to recognize such devices.
The camera I had was based on the Anyka AK39Ev330 hardware platform, essentially an ARM CPU with an attached pinhole camera. Presumably, other cameras connecting to the LookCam app are similar, even though there are some provisions for hardware differences in the firmware. The device looked very convincing, its main giveaway being unexpected heat development.
All LookCam cameras I’ve seen were strictly noname devices; it is unclear who builds them. Given the variety of competing form factors I suspect that a number of hardware vendors are involved. Maybe there is one vendor producing the raw camera kit and several others who package it within the respective casings.
The LookCam app
The LookCam app can manage a number of cameras. Some people demonstrating the app on YouTube had around 50 of them, though I suspect that these are camera sellers and not regular users.
LookCam app as seen in the example screenshot
While each camera can be given a custom name, its unique ID is always visible as well. For example, the first camera listed in the screenshot above has the ID GHBB-000001-NRLXW which the app shortens to G000001NRLXW. Here GHBB is the device prefix: LookCam supports a number of these but only BHCC, FHBB and GHBB seem to exist in reality (abbreviated as B, F and G respectively). 000001 is the device number; each prefix can theoretically support a million devices. The final part is a five-letter verification code: NRLXW. This one has to be known for the device connection to succeed; it makes enumerating device IDs more difficult.
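To make that structure concrete, here is a small sketch in C — the struct, field and function names are my own illustration, not anything taken from the app or firmware:

#include <stdio.h>

struct device_id {
    char prefix[5];       /* e.g. "GHBB" - only BHCC, FHBB and GHBB seen in practice */
    char number[7];       /* six-digit device number, e.g. "000001" */
    char verification[6]; /* five-letter verification code, e.g. "NRLXW" */
};

/* Build the abbreviated form shown in the app: first letter of the
 * prefix, then the device number, then the verification code. */
static void abbreviate(const struct device_id *id, char *out, size_t len)
{
    snprintf(out, len, "%c%s%s", id->prefix[0], id->number, id->verification);
}

int main(void)
{
    struct device_id id = { "GHBB", "000001", "NRLXW" };
    char shortform[16];

    abbreviate(&id, shortform, sizeof(shortform));
    /* Prints: GHBB-000001-NRLXW -> G000001NRLXW */
    printf("%s-%s-%s -> %s\n", id.prefix, id.number, id.verification, shortform);
    return 0;
}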
Out of the box, the device is in access point mode: it provides a WiFi access point with the device ID used as wireless network name. You can connect to that access point, and LookCam will be able to find the camera via a network broadcast, allowing you to configure it. You might be inclined to leave the camera in access point mode but it is impossible to set a WiFi password. This means that anybody in the vicinity can connect to this WiFi network and access the camera through it. So there is no way around configuring the camera to connect to your network.
Once the camera is connected to your network the P2P “magic” happens. LookCam app can still find the camera via a network broadcast. But it can also establish a connection when you are not on the same network. In other words: the camera can be accessed from the internet, assuming that someone knows its device ID.
Exposing the camera to internet-based attacks might not be something that you want, with it being in principle perfectly capable of writing its recordings to an SD card. But if you deny it access to the internet (e.g. via a firewall rule) the camera will try to contact its server, fail, panic and reboot. It will keep rebooting until it receives a response from the server.
One more thing to note: the device ID is displayed on pretty much every screen of this app. So when users share screenshots or videos of the app (which they do often) they will inevitably expose the ID of their camera, allowing anyone in the world to connect to it. I’ve seen very few cases of people censoring the device ID; clearly most of them aren’t aware that it is sensitive information. The LookCam app definitely isn’t communicating that it is.
The PPPP protocol
The basics
How can LookCam establish a connection to the camera having only its device ID? The app uses the PPPP protocol developed by the Chinese company CS2 Network. Supposedly, in 2019 CS2 Network had 300 customers with 20 million devices in total. This company supplies its customers with a code library and the corresponding server code which the customers can run as a black box. The idea of the protocol is providing an equivalent of the TCP protocol which implicitly locates a device by its ID and connects to it.
Slide from a CS2 Network sales pitch
Side note: Whoever designed this protocol didn’t really understand TCP. For example, they tried to replicate the fault tolerance of TCP. But instead of making retransmissions an underlying protocol feature there are dozens of different (not duplicated but really different) retransmission loops throughout the library. Where TCP tries to detect network congestion and back off, the PPPP protocol will send even more retransmitted messages, rendering suboptimal connections completely unusable.
Despite being marketed as Peer-to-Peer (P2P) this protocol relies on centralized servers. Each device prefix is associated with a set of three servers, this being the protocol designers’ idea of high-availability infrastructure. Devices regularly send messages to all three servers, making sure that these are aware of the device’s IP address. When the LookCam app (client) wants to connect to a device, it also contacts all three servers to get the device’s IP address.
Slide from a CS2 Network sales pitch
The P2P part is the fact that device and client try to establish a direct connection instead of relaying all communication via a central server. The complicating factor here are firewalls which usually disallow direct connections. The developers didn’t like established approaches like Universal Plug and Play (UPnP), probably because these are often disabled for security reasons. So they used a trick called UDP hole punching. This involves guessing which port the firewall assigned to outgoing UDP traffic and then communicating with that port, so that the firewall considers incoming packets a response to previously sent UDP packets and allows them through.
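UDP hole punching itself is a standard NAT traversal trick. A generic sketch of one side — not CS2’s implementation, and with the rendezvous step (learning the peer’s public address from the server) assumed to have happened already — looks roughly like this:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <peer-public-ip> <peer-public-port>\n", argv[0]);
        return 1;
    }

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in peer = { .sin_family = AF_INET,
                                .sin_port = htons(atoi(argv[2])) };
    inet_pton(AF_INET, argv[1], &peer.sin_addr);

    /* Send a few probes so our own NAT/firewall opens (and keeps) a
     * mapping towards the peer's public endpoint. */
    for (int i = 0; i < 3; i++) {
        sendto(sock, "punch", 5, 0, (struct sockaddr *)&peer, sizeof(peer));
        sleep(1);
    }

    /* If the peer does the same towards our public endpoint, its packets
     * are now treated as replies by our firewall and arrive here. */
    char buf[1500];
    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
    if (n >= 0)
        printf("direct path established, received %zd bytes\n", n);

    close(sock);
    return 0;
}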
Does that always work? That’s doubtful. So the PPPP protocol allows for relay servers to be used as fallback, forwarding traffic from and to the device. But this direct communication presumably succeeds often enough to keep the traffic on PPPP servers low, saving costs.
The FHBB and GHBB device prefixes are handled by the same set of servers, named the “mykj” network in the LookCam app internally. The same string appears in the name of the main class as well, indicating that it likely refers to the company developing the app. This seems to be a short form of “Meiyuan Keji,” a company name that translates as “Dollar Technology.” I couldn’t find any further information on this company however.
The BHCC device prefix is handled by a different set of servers that the app calls the “hekai” network. The corresponding devices appear to be marketed in China only.
The “encryption”
With potentially very sensitive data being transmitted one would hope that the data is safely encrypted in transit. The TCP protocol outsources this task to additional layers like TLS. The PPPP protocol on the other hand has built-in “encryption,” in fact even two different encryption mechanisms.
First there is the blanket encryption of all transmitted messages. The corresponding function is aptly named P2P_Proprietary_Encrypt and it is in fact a very proprietary encryption algorithm. To my untrained eye there are a few issues with it:
It is optional, with many networks choosing not to use it (like all networks supported by LookCam).
When present, the encryption key is part of the “init string” which is hardcoded in the app. It is trivial to extract from the application, even a file viewer will do if you know what to look for.
Even if the encryption key weren’t easily extracted, it is mashed into four bytes which become the effective key. So there are merely four billion possible keys.
Even if it weren’t possible to just go through all possible encryption keys, the algorithm can be trivially attacked via a known-plaintext attack. It’s sometimes even possible to deduce the effective key by passively observing a single four-byte MSG_HELLO message (it is known that the first four-byte message sent to port 32100 has the plaintext F1 00 00 00).
In addition to that, some messages get special treatment. For example, the MSG_REPORT_SESSION_READY message is generally encrypted via P2P_Proprietary_Encrypt function with a key that is hardcoded in the CS2 library and has the same value in every app I checked.
Some messages employ a different encryption method. In case of the networks supported by LookCam it is only the MSG_DEV_LGN_CRC message (device registering with the server) that is used instead of the plaintext MSG_DEV_LGN message. As this message is sent by the device, the corresponding encryption key is only present in the device firmware, not in the application. I didn’t bother checking whether the server would still accept the unencrypted MSG_DEV_LGN message.
The encryption function responsible here is PPPP_CRCEnc. No, this isn’t a cyclic redundancy check (CRC). It’s rather an encryption function that will extend the plaintext by four bytes of padding. The decryptor will validate the padding, presumably that’s the reason for the name.
Of course, this still doesn’t make it an authenticated encryption scheme, yet the padding oracle attack is really the least of its worries. While there is a complicated selection approach, it effectively results in a sequence of bytes that the plaintext is XOR’ed with — the same sequence for every single message encrypted in this way. Wikipedia has the following to say on the security of XOR ciphers:
By itself, using a constant repeating key, a simple XOR cipher can trivially be broken using frequency analysis. If the content of any message can be guessed or otherwise known then the key can be revealed.
Well, yes. That’s what we have here.
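To illustrate just how little work this takes, here is a minimal known-plaintext sketch in C; the byte values are made up for illustration, only the technique matches what is described above:

#include <stdio.h>

int main(void)
{
    /* The attacker knows (or guesses) the plaintext of one captured
     * message... */
    unsigned char known_plain[] = { 0xF1, 0x41, 0x00, 0x14 };
    /* ...and has the matching ciphertext from the wire. */
    unsigned char ciphertext[]  = { 0x9C, 0x2E, 0x6D, 0x79 };

    /* XOR of the two recovers the keystream bytes... */
    unsigned char keystream[sizeof(known_plain)];
    for (size_t i = 0; i < sizeof(known_plain); i++)
        keystream[i] = known_plain[i] ^ ciphertext[i];

    /* ...and since the same keystream is reused for every message, these
     * bytes now decrypt any other captured message as well. */
    for (size_t i = 0; i < sizeof(keystream); i++)
        printf("%02X ", keystream[i]);
    printf("\n");
    return 0;
}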
It’s doubtful that any of these encryption algorithms can deter even a barely determined attacker. But a blanket encryption with P2P_Proprietary_Encrypt (which LookCam doesn’t enable) would have three effects:
Network traffic is obfuscated, making the contents of transmitted messages not immediately obvious.
Vulnerable devices cannot be discovered on the local network using the script developed by Paul Marrapese. This script relies on devices responding to an unencrypted search request.
P2P servers can no longer be discovered easily and won’t show up on Shodan for example. This discovery method relies on servers responding to an unencrypted MSG_HELLO message.
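As an aside, the server discovery mentioned in the last point boils down to very little code. A minimal sketch — not Paul Marrapese’s script, and relying only on the MSG_HELLO details given in this article — could look like this:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <ip-address>\n", argv[0]);
        return 1;
    }

    /* MSG_HELLO: F1 00 00 00, sent unencrypted to UDP port 32100. */
    unsigned char hello[4] = { 0xF1, 0x00, 0x00, 0x00 };
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    struct timeval tv = { .tv_sec = 2 };          /* don't wait forever */
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    struct sockaddr_in srv = { .sin_family = AF_INET,
                               .sin_port = htons(32100) };
    inet_pton(AF_INET, argv[1], &srv.sin_addr);

    sendto(sock, hello, sizeof(hello), 0, (struct sockaddr *)&srv, sizeof(srv));

    /* Response parsing is omitted; any reply at all is a strong hint
     * that a PPPP server is listening at this address. */
    unsigned char buf[64];
    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
    close(sock);

    if (n > 0)
        printf("%s answered MSG_HELLO with %zd bytes\n", argv[1], n);
    else
        printf("%s: no answer\n", argv[1]);
    return 0;
}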
The threat model
It is obvious that the designers of the PPPP protocol don’t understand cryptography, yet for some reason they don’t want to use established solutions either. It cannot even be about performance because AES is supported in hardware on these devices. But why, for example, the strange choice of encrypting one particular message while keeping the encryption of highly private data optional? Turns out, this is due to the threat model used by the PPPP protocol designers.
Slide from a CS2 Network sales pitch
As a CS2 Network presentation deck shows, their threat model isn’t concerned about data leaks. The concern is rather denial-of-service attacks caused by registering fake devices. And that’s why this one message enjoys additional encryption. Not that I really understand the concern here, since the supposed hacker would still have to generate valid device IDs somehow. And if they can do that – well, them bringing the server down should really be the least concern.
But wait, there is another security layer here!
Slide from a CS2 Network sales pitch
This is about the “init string” already mentioned in the context of encryption keys above. It also contains the IP addresses of the servers, mildly obfuscated. While these were “given to platform owner only,” these are necessarily contained in the LookCam app:
Some other apps contain dozens of such init strings, allowing them to deal with many different networks. So the threat model of the PPPP protocol cannot imagine someone extracting the “encrypted P2P server IP string” from the app. It cannot imagine someone reverse engineering the (trivial) obfuscation used here. And it definitely cannot imagine someone reverse engineering the protocol, so that they can communicate with the servers via “raw IP string” instead of their obfuscated one. Note: The latter has happened on several documented occasions already, e.g. here.
These underlying assumptions become even more obvious on this slide:
Slide from a CS2 Network sales pitch
Yes, the only imaginable way to read out network data is via the API of their library. With a threat model like this, it isn’t surprising that the protocol makes all the wrong choices security-wise.
The firmware
Once a connection is established the LookCam app and the camera will exchange JSON-encoded messages like the following:
{"cmd":"LoginDev","pwd":"123456"}
A paper from the University of Warwick already took a closer look at the firmware and discovered something surprising. The LookCam app will send a LoginDev command like the one above to check whether the correct access password is being used for the device. But sending this command is entirely optional, and the firmware will happily accept other commands without a “login”!
The LookCam app will also send the access password along with every other command, yet this password isn’t checked by the firmware either. I tried adding a trivial modification to the LookCam app which made it ignore the result of the LoginDev command. And this in fact bypassed the authentication completely, allowing me to access my camera despite a wrong password.
I could also confirm their finding that the DownloadFile command will read arbitrary files, allowing me to extract the firmware of my camera with the approach described in the paper. They even describe a trivial Remote Code Execution vulnerability which I also found in my firmware: that firmware often relies on running shell commands for tasks that could be easily done in its C language code.
This clearly isn’t the only Remote Code Execution vulnerability however. Here is some fairly typical code for this firmware:
This code copies a string (pointlessly, but this isn’t the issue here). It completely fails to consider the size of the target buffer, going by the size of the incoming data instead. So any command larger than 255 bytes will cause a buffer overflow. And there is no stack canary here, while Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR) are disabled, so nothing prevents this buffer overflow from being turned into Remote Code Execution.
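The firmware snippet itself isn’t reproduced here, but the pattern described amounts to something like the following sketch of my own (hypothetical function and buffer names, sized to match the 255-byte threshold mentioned above):

#include <string.h>

void handle_command(const char *incoming, size_t incoming_len)
{
    char cmd[256];                       /* fixed-size stack buffer */

    /* The copy is sized by the incoming data, not by the destination:
     * anything over 255 bytes overflows cmd. With no stack canary and
     * DEP/ASLR disabled, this is readily turned into code execution. */
    memcpy(cmd, incoming, incoming_len);
    cmd[incoming_len] = '\0';

    /* ... go on to parse the JSON command from cmd ... */
}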
Finally, I’ve discovered that the searchWiFiList command will produce the list of WiFi networks visible to the camera. These by themselves often allow a good guess as to where the camera is located. In combination with a geolocation service they will typically narrow down the camera’s position to a radius of only a few dozen meters.
The only complication here: most geolocation services require not the network names but the MAC addresses of the access points. The MAC addresses aren’t part of the response data however. But searchWiFiList works by running the iwlist shell command and storing the complete output in the /tmp/wifi_scan.txt file. It reads this file but does not remove it. This means that the file can subsequently be downloaded via the DownloadFile command (which, as mentioned above, reads arbitrary files), and that file contains the full data, including the MAC addresses of all access points. So somebody who happened to learn the device ID can not only access the video stream but also find out where exactly this footage is being recorded.
The camera I’ve been looking at is running firmware version 2023-11-22. Is there a newer version, maybe one that fixes the password checks or the already published Remote Code Execution vulnerability? I have no idea. If the firmware for these cameras is available somewhere online then I cannot find it. I’ve also been looking for some kind of update functionality in these devices. But there is only a generic script from the Anyka SDK which isn’t usable for anyone other than maybe the hardware vendor.
The cloud
When looking at the firmware I noticed some code uploading 5 MiB data chunks to api.l040z.com (or apicn.l040z.com if you happen to own a BHCC device). Now uploading exactly 5 MiB is weird (this size is hardcoded) but inspecting the LookCam app confirmed it: this is cloud functionality, and the firmware regularly uploads videos in this way. At least it does that when cloud functionality is enabled.
First thing worth noting: while the cloud server uses regular HTTP rather than some exotic protocol, all connections to it are generally unencrypted. The firmware simply lacks a TLS library it could use, and so the server doesn’t bother with supporting TLS. Meaning for example: if you happen to use their cloud functionality your ISP had better be very trustworthy because it can see all the data your camera sends to the LookCam cloud. In fact, your ISP could even run its own “cloud server” and the camera will happily send your recorded videos to it.
Anyone dare a guess what the app developers mean by “financial-grade encryption scheme” here? Is it worse or better than military-grade encryption?
Screenshot from the LookCam app
Second interesting finding: the cloud server has no authentication whatsoever. The camera only needs to know its device ID when uploading to the cloud. And the LookCam app – well, any cloud-related requests here also require device ID only. If somebody happens to learn your device ID they will gain full access to your cloud storage.
Now you might think that you can simply skip paying for the cloud service which, depending on the package you book, can cost as much as $40 per month. But this doesn’t mean that you are on the safe side, because you aren’t the one controlling the cloud functionality on your device — the cloud server is. Every time the device boots up it sends a request to http://api.l040z.com/camera/signurl and the response tells it whether cloud functionality needs to be enabled.
So if LookCam developers decide that they want to see what your camera is doing (or if Chinese authorities become interested in that), they can always adjust that server response and the camera will start uploading video snapshots. You won’t even notice anything because the LookCam app checks cloud configuration by requesting http://api.l040z.com/app/cloudConfig which can remain unchanged.
And they aren’t the only ones who can enable the cloud functionality of your device. Anybody who happens to know your device ID can buy a cloud package for it. This way they can get access to your video recordings without ever accessing your device directly. And you will only notice the cloud functionality being active if you happen to go to the corresponding tab in the LookCam app.
How safe are device IDs?
Now that you are aware of device IDs being highly sensitive data, you certainly won’t upload screenshots containing them to social media. Does that mean that your camera is safe because nobody other than you knows its ID?
The short answer is: you don’t know that. First of all, you simply don’t know who already has your device ID. Did the shop that sold you the camera write the ID down? Did they maybe record a sales pitch featuring your camera before they sold it to you? Did somebody notice your camera’s device ID show up in the list of WiFi networks when it was running in access point mode? Did anybody coming to your home run a script to discover PPPP devices on the network? Yes, all of that might seem unlikely, yet it should be reason enough to wonder whether your camera’s recordings are really as private as they should be.
Then there is the issue of unencrypted data transfers. Whenever you connect to your camera from outside your home network the LookCam app will send all data unencrypted – including the device ID. Do you do that when connected to public WiFi? At work? In a vacation home? You don’t know who else is listening.
And finally there is the matter of verification codes which are the only mechanism preventing somebody from enumerating all device IDs. How difficult would it be to guess a verification code? Verification codes seem to use 22 letters (all Latin uppercase letters but A, I, O, Q). With five letters this means around 5 million possible combinations. According to Paul Marrapese PPPP servers don’t implement rate limiting (page 33), making trying out all these combinations perfectly realistic – maybe not for all possible device IDs but definitely for some.
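To put that “around 5 million” into perspective, here is a back-of-the-envelope calculation; the guess rate is purely an assumption for illustration — all we know from the above is that the servers don’t rate limit:

#include <stdio.h>

int main(void)
{
    const int alphabet = 22;          /* Latin uppercase minus A, I, O, Q */
    const int length = 5;             /* five-letter verification code */

    long combinations = 1;
    for (int i = 0; i < length; i++)
        combinations *= alphabet;     /* 22^5 = 5,153,632 */

    const double guesses_per_second = 1000.0;  /* assumed attacker rate */
    printf("codes: %ld, worst case at %.0f guesses/s: about %.1f hours\n",
           combinations, guesses_per_second,
           combinations / guesses_per_second / 3600.0);
    return 0;
}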
But that resource-intensive approach is only necessary as long as the algorithm used to generate verification codes is a secret. Yet we have to assume that at least CS2 Network’s 300 customers have access to that algorithm, given that their server software somehow validates device IDs. Are they all trustworthy? How much would it cost to become a “customer” simply in order to learn that algorithm?
And even if we are willing to assume that CS2 Network runs proper background checks to ensure that their algorithm remains a secret: how difficult would it be to guess that algorithm? I found a number of device IDs online, and my primitive analysis of their verification codes indicates that these aren’t distributed equally. There is a noticeable affinity for certain prime numbers, so the algorithm behind them is likely a similar hack job as the other CS2 Network algorithms, throwing in mathematical operations and table lookups semi-randomly to make things look complicated. How long would this approach hold if somebody with actual cryptanalysis knowledge decided to figure this out?
Recommendations
So if you happen to own one of these cameras, what does all this mean to you? Even if you never disclosed the camera’s device ID yourself, you cannot rely on it staying a secret. And this means that whatever your camera is recording is no longer private.
Are you using it as a security camera? Your security camera might now inform potential thieves of the stuff that you have standing around and the times when you leave home. It will also let them know where exactly you live.
Are you using it to keep an eye on your child? Just… don’t. Even if you think that you yourself have a right to violate your child’s privacy, you really don’t want anybody else to watch.
And even if you “have nothing to hide”: somebody could compromise the camera in order to hack other devices on your network or to simply make it part of a botnet. Such things happened before, many times actually.
So the best solution is to dispose of this camera ASAP. Don’t sell it please because this only moves the problem to the next person. The main question is: how do you know that the camera you get instead will do better? I can only think of one indicator: if you want to access the camera from outside your network it should involve explicit setup steps, likely changing router configuration. The camera shouldn’t just expose itself to the internet automatically.
But if you actually paid hundreds of dollars for that camera and dumping it isn’t an option: running it in a safe manner is complicated. As I mentioned already, simply blocking internet access for the camera won’t work. This can be worked around, but it’s complex enough not to be worth doing. You would be better off installing a custom firmware. I haven’t tried it but at least this one looks like somebody actually thought about security.
Further reading
As far as I am aware, the first research on the PPPP protocol was published by Paul Marrapese in 2019. He found a number of vulnerabilities, including one brand of cameras shipping their algorithm to generate verification codes with their client application. Knowing this algorithm, device IDs could be enumerated easily. Paul used this flaw to display the locations of millions of affected devices. His DEF CON talk is linked from the website and well worth watching.
A paper from the University of Warwick (2023) researched the LookCam app specifically. In addition to some of the vulnerabilities I mentioned here it contains a number of details on how these cameras operate.
This Elastic Labs article (2024) took a close look at some other PPPP-based cameras, finding a number of issues.
The CS2 Network sales presentation (2016) offers a fascinating look into the thinking of PPPP protocol designers and into how their system was meant to work.