
Zoom admits some calls were routed through China by mistake

Hours after security researchers at Citizen Lab reported that some Zoom calls were routed through China, the video conferencing platform has offered an apology and a partial explanation.

To recap, Zoom has faced a barrage of headlines this week over its security policies and privacy practices, as the hundreds of millions of people forced to work from home during the coronavirus pandemic still need to communicate with each other.

The latest findings landed earlier today, when Citizen Lab researchers said that some calls made in North America were routed through China, as were the encryption keys used to secure those calls. As was noted this week, Zoom is not end-to-end encrypted at all, despite the company’s earlier claims, meaning that Zoom controls the encryption keys and can therefore access the contents of its customers’ calls. Zoom said in an earlier blog post that it has “implemented robust and validated internal controls to prevent unauthorized access to any content that users share during meetings.” Those controls would not stop Chinese authorities, however, who could demand that Zoom turn over any encryption keys held on its servers in China to facilitate decryption of the contents of encrypted calls.

Zoom now says that during its efforts to ramp up its server capacity to accommodate the massive influx of users over the past few weeks, it “mistakenly” allowed two of its Chinese data centers to accept calls as a backup in the event of network congestion.

From Zoom’s CEO Eric Yuan:

During normal operations, Zoom clients attempt to connect to a series of primary datacenters in or near a user’s region, and if those multiple connection attempts fail due to network congestion or other issues, clients will reach out to two secondary datacenters off of a list of several secondary datacenters as a potential backup bridge to the Zoom platform. In all instances, Zoom clients are provided with a list of datacenters appropriate to their region. This system is critical to Zoom’s trademark reliability, particularly during times of massive internet stress.

In other words, North American calls are supposed to stay in North America, just as European calls are supposed to stay in Europe. This is what Zoom calls its data center “geofencing.” But when traffic spikes, the network shifts traffic to the nearest data center with the most available capacity.
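The fallback behavior Zoom describes can be sketched roughly as follows. This is a hypothetical illustration, not Zoom’s actual implementation; the function, region, and datacenter names are all invented for the example.

```python
# Hypothetical sketch of region-based datacenter selection with fallback.
# All names and regions here are illustrative, not Zoom's real topology.

PRIMARY = {
    "na": ["na-east", "na-west"],
    "eu": ["eu-central", "eu-west"],
}

# Backup datacenters a client may fall back to under congestion.
# Geofencing means a region's backup list should never contain
# out-of-region servers -- the bug Zoom described amounts to Chinese
# datacenters appearing on the backup list for non-Chinese regions.
SECONDARY = {
    "na": ["na-backup-1", "na-backup-2"],
    "eu": ["eu-backup-1", "eu-backup-2"],
}

def pick_datacenter(region, reachable):
    """Return the first reachable primary, else a reachable secondary."""
    for dc in PRIMARY[region]:
        if dc in reachable:
            return dc
    for dc in SECONDARY[region]:
        if dc in reachable:
            return dc
    raise ConnectionError("no datacenter reachable for region " + region)

# Normal operation: a North American client lands on a primary.
print(pick_datacenter("na", {"na-east", "na-backup-1"}))  # na-east

# Congestion: primaries unreachable, client falls back to a secondary.
print(pick_datacenter("na", {"na-backup-2"}))             # na-backup-2
```

Under this model, keeping the secondary lists strictly in-region is what makes the geofencing hold; a single out-of-region entry on a backup list silently breaks the guarantee only when primaries are saturated, which is exactly the failure mode Zoom reported.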

China, however, is supposed to be an exception, largely because of privacy concerns among Western companies. China’s own laws and regulations also mandate that companies operating on the mainland keep citizens’ data within its borders.

Zoom said that capacity it “rapidly added” to its Chinese regions in February to handle demand was also put on an international whitelist of backup data centers, which meant non-Chinese users were in some cases connected to Chinese servers when data centers in other regions were unavailable.

Zoom said this happened in “extremely limited circumstances.” When reached, a Zoom spokesperson did not quantify the number of users affected.

Zoom said that it has now reversed that incorrect whitelisting. The company also said users on the company’s dedicated government plan were not affected by the accidental rerouting.

But some questions remain. The blog post only briefly addresses Zoom’s encryption design. Citizen Lab criticized the company for “rolling its own” encryption, that is, building its own encryption scheme rather than using an established one. Experts have long rejected efforts by companies to build their own encryption, because home-grown schemes don’t undergo the same scrutiny and peer review as the decades-old encryption standards in use today.

Zoom said in its defense that it can “do better” on its encryption scheme, which it says covers a “large range of use cases.” Zoom also said it was consulting with outside experts, but when asked, a spokesperson declined to name any.

Bill Marczak, one of the Citizen Lab researchers that authored today’s report, told TechCrunch he was “cautiously optimistic” about Zoom’s response.

“The bigger issue here is that Zoom has apparently written their own scheme for encrypting and securing calls,” he said, and that “there are Zoom servers in Beijing that have access to the meeting encryption keys.”

“If you’re a well-resourced entity, obtaining a copy of the internet traffic containing some particularly high-value encrypted Zoom call is perhaps not that hard,” said Marczak.

“The huge shift to platforms like Zoom during the COVID-19 pandemic makes platforms like Zoom attractive targets for many different types of intelligence agencies, not just China,” he said. “Fortunately, the company has (so far) hit all the right notes in responding to this new wave of scrutiny from security researchers, and have committed themselves to make improvements in their app.”

Zoom’s blog post gets points for transparency. But the company is still facing pressure from New York’s attorney general and from two class-action lawsuits. Just today, several lawmakers demanded to know what it’s doing to protect users’ privacy.

Will Zoom’s mea culpas be enough?

How safe are school records? Not very, says student security researcher

If you can’t trust your bank, your government or your medical provider to protect your data, what makes you think students are any safer?

Turns out, according to one student security researcher, they’re not.

Eighteen-year-old Bill Demirkapi, a recent high school graduate in Boston, Massachusetts, spent much of his latter school years with an eye on his own student data. Through self-taught pen testing and bug hunting, Demirkapi found several vulnerabilities in his school’s learning management system, Blackboard, and his school district’s student information system, known as Aspen and built by Follett, which centralizes student data, including performance, grades, and health records.

The former student reported the flaws and revealed his findings at the Def Con security conference on Friday.

“I’ve always been fascinated with the idea of hacking,” Demirkapi told TechCrunch prior to his talk. “I started researching but I learned by doing,” he said.

Among the more damaging issues Demirkapi found in Follett’s student information system was an improper access control vulnerability, which, if exploited, could have allowed an attacker to read and write to the central Aspen database and obtain any student’s data.
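An improper access control flaw of this kind typically means the server fetches a record by its identifier without checking that the requester is allowed to see it. A minimal sketch of the pattern follows; the data, handler names, and fix are hypothetical illustrations, not Follett’s actual code.

```python
# Minimal illustration of an improper access control (IDOR) flaw.
# The records and handler names are hypothetical, not Follett's code.

RECORDS = {
    1: {"owner": "alice", "grades": "A"},
    2: {"owner": "bob", "grades": "C"},
}

def get_record_vulnerable(user, record_id):
    # BUG: returns whatever record the caller asks for -- no check
    # that the requesting user actually owns it.
    return RECORDS[record_id]

def get_record_fixed(user, record_id):
    # Fix: enforce ownership server-side before returning the record.
    record = RECORDS[record_id]
    if record["owner"] != user:
        raise PermissionError("not your record")
    return record

# "alice" can read bob's grades through the vulnerable handler...
print(get_record_vulnerable("alice", 2)["grades"])  # C
```

The fix is a server-side authorization check on every fetch; relying on the client not to request other students’ identifiers is exactly what makes the vulnerable version exploitable.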

Blackboard’s Community Engagement platform had several vulnerabilities, including an information disclosure bug. A debugging misconfiguration allowed him to discover two subdomains, which spat back the credentials for Apple app provisioning accounts for dozens of school districts, as well as the database credentials for most, if not every, school district using Blackboard’s Community Engagement platform, said Demirkapi.

“School data or student data should be taken as seriously as health data. The next generation should be one of our number one priorities, who looks out for those who can’t defend themselves.”
Bill Demirkapi, security researcher

Another set of vulnerabilities could have allowed an authorized user, like a student, to carry out SQL injection attacks. Demirkapi said six databases could be tricked into disclosing data by injecting SQL commands, exposing grades, school attendance records, punishment history, library balances, and other sensitive and private data.

Some of the SQL injection flaws were blind attacks, in which the database reveals information only indirectly, such as through true-or-false responses, meaning dumping the entire database would have been more difficult but not impossible.
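SQL injection of the kind described works when user input is pasted directly into a query string, letting an attacker rewrite the query’s logic. The self-contained demonstration below uses an illustrative schema, not Aspen’s actual database, and contrasts the vulnerable pattern with the standard parameterized fix.

```python
import sqlite3

# Illustrative schema, not Aspen's actual database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE grades (student TEXT, grade TEXT)")
db.executemany("INSERT INTO grades VALUES (?, ?)",
               [("alice", "A"), ("bob", "C")])

def lookup_vulnerable(student):
    # BUG: user input concatenated into the SQL string. The classic
    # payload "' OR '1'='1" makes the WHERE clause always true.
    query = "SELECT student, grade FROM grades WHERE student = '%s'" % student
    return db.execute(query).fetchall()

def lookup_safe(student):
    # Parameterized query: input is treated as data, never as SQL.
    return db.execute(
        "SELECT student, grade FROM grades WHERE student = ?", (student,)
    ).fetchall()

payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # dumps every row in the table
print(lookup_safe(payload))        # [] -- no student has that literal name
```

A blind variant of the same flaw returns no rows directly; the attacker instead asks yes/no questions (for example, injecting conditions that change the response) and reconstructs the data one bit at a time, which is why Demirkapi describes dumping the database as harder but not impossible.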

In all, over 5,000 schools and over five million students and teachers were impacted by the SQL injection vulnerabilities alone, he said.

Demirkapi said he was mindful to not access any student records other than his own. But he warned that any low-skilled attacker could have done considerable damage by accessing and obtaining student records, not least thanks to the simplicity of the database’s password. He wouldn’t say what it was, only that it was “worse than ‘1234’.”

But finding the vulnerabilities was only one part of the challenge. Disclosing them to the companies turned out to be just as tricky.

Demirkapi admitted that his disclosure with Follett could have been better. He found that one of the bugs gave him improper access to create his own “group resource,” such as a snippet of text, which was viewable to every user on the system.

“What does an immature 11th grader do when you hand him a very, very, loud megaphone?” he said. “Yell into it.”

And that’s exactly what he did. He sent out a message to every user, displaying each user’s login cookies on their screen. “No worries, I didn’t steal them,” the alert read.

“The school wasn’t thrilled with it,” he said. “Fortunately, I got off with a two-day suspension.”

He conceded it wasn’t one of his smartest ideas. He wanted to show his proof-of-concept but was unable to contact Follett with details of the vulnerability. He later went through his school, which set up a meeting, and disclosed the bugs to the company.

Blackboard, however, ignored Demirkapi’s reports for several months, he said. He knows because after the first month of being ignored, he included an email tracker, allowing him to see how often the email was opened, which turned out to be several times in the first few hours after sending. And yet the company still did not respond to the researcher’s bug report.

Blackboard eventually fixed the vulnerabilities, but Demirkapi said he found that the companies “weren’t really prepared to handle vulnerability reports,” despite Blackboard ostensibly having a published vulnerability disclosure process.

“It surprised me how insecure student data is,” he said. “School data or student data should be taken as seriously as health data,” he said. “The next generation should be one of our number one priorities, who looks out for those who can’t defend themselves.”

He said that if a teenager could discover such serious security flaws, more advanced attackers could likely do far more damage.

Heather Phillips, a spokesperson for Blackboard, said the company appreciated Demirkapi’s disclosure.

“We have addressed several issues that were brought to our attention by Mr. Demirkapi and have no indication that these vulnerabilities were exploited or that any clients’ personal information was accessed by Mr. Demirkapi or any other unauthorized party,” the statement said. “One of the lessons learned from this particular exchange is that we could improve how we communicate with security researchers who bring these issues to our attention.”

Follett spokesperson Tom Kline said the company “developed and deployed a patch to address the web vulnerability” in July 2018.

The student researcher said he was not deterred by the issues he faced with disclosure.

“I’m 100% set already on doing computer security as a career,” he said. “Just because some vendors aren’t the best examples of good responsible disclosure or have a good security program doesn’t mean they’re representative of the entire security field.”