Privacy


Facebook starts shipping Portal, clarifies privacy/ad policy

Planning to get in early on the Portal phenomenon? Facebook announced today that it’s starting to ship the video chat device. The company’s first true piece of devoted hardware comes in two configurations: the Echo Show-like Portal and the larger Portal+, which run $199 and $349, respectively. There’s also a $298 bundle of two of the smaller units.

The device has raised privacy red flags since it was announced early last month. The company attempted to nip some of those issues in the bud ahead of launch — after all, 2018 hasn’t been a great year for Facebook privacy. Nor has the company done itself any favors with the murky comments it has offered around data tracking and ad targeting in the weeks since.

With all that in mind, Facebook is also marking the launch with a blog post further spelling out Portal’s privacy policy. At the top level, the company promises not to view or listen to video calls. Calls are also encrypted, and all of the AI processing is performed locally on-device — i.e., nothing is sent to Facebook’s servers.

In the post, Facebook also promises to treat conversations on Portal the way it treats all Messenger experiences. That means that while it won’t view the calls, it does indeed track usage data, which it may later use to serve up cross-platform ads.

“When you make a Portal video call, we process the same device usage information as other Messenger-enabled devices,” Facebook writes. “This can include volume level, number of bytes received, and frame resolution — it can also include the frequency and length of your calls. Some of this information may be used for advertising purposes. For example, we may use the fact that you make lots of video calls to inform some of the ads you see. This information does not include the contents of your Portal video calls.”

In other words, it’s not collecting personally identifying data, but it is tracking usage information. And honestly, if you have a Facebook account, you’ve already signed up for that. The question is whether you’re comfortable adding an extra layer of that tracking and bringing it into your living room or kitchen.

Facial recognition startup Kairos founder continues to fight attempted takeover

There’s some turmoil brewing over at Miami-based facial recognition startup Kairos. Late last month, New World Angels President and Kairos board chairperson Steve O’Hara sent a letter to Kairos founder Brian Brackeen notifying him of his termination as chief executive officer. The letter cited willful misconduct as the cause. Specifically, O’Hara said Brackeen misled shareholders and potential investors, misappropriated corporate funds, did not report to the board of directors and created a divisive atmosphere.

Kairos is trying to tackle the society-wide problem of discrimination in artificial intelligence. While that’s not the company’s explicit mission — it’s to provide authentication tools to businesses — algorithmic bias has long been a topic the company, especially Brackeen, has addressed.

Brackeen’s purported termination was followed by a lawsuit, filed on behalf of Kairos against Brackeen, alleging theft and breach of fiduciary duties, among other things. Brackeen, in an open letter about the “poorly constructed coup” sent to shareholders a couple of days ago — and shared with TechCrunch — denies the allegations and details his side of the story. He hopes that the lawsuit will be dismissed and that he will officially be reinstated as CEO, he told TechCrunch. As it stands today, Melissa Doval, who became CFO of Kairos in July, is acting as interim CEO.

“The Kairos team is amazing and resilient and has blown me away with their commitment to the brand,” Doval told TechCrunch. “I’m humbled by how everybody has just kind of stuck around in light of everything that has transpired.”

The lawsuit, filed on October 10 in Miami-Dade and spearheaded by Kairos COO Mary Wolff, alleges Brackeen “used his position as CEO and founder to further his own agenda of gaining personal notoriety, press, and a reputation in the global technology community” to the detriment of Kairos. The lawsuit describes how Brackeen spent less than 30 percent of his time in the company’s headquarters, “even though the Company was struggling financially.”

Other allegations detail how Brackeen used the company credit card to pay for personal expenses and had the company pay for a car he bought for his then-girlfriend. Kairos alleges Brackeen owes the company at least $60,000.

In his open letter, Brackeen addresses the claims head-on: “Steve, Melissa and Mary, as cause for my termination and their lawsuit against me, have accused me of stealing 60k from Kairos, comprised of non-work related travel, non-work related expenses, a laptop, and a beach club membership,” he wrote. “Let’s talk about this. While I immediately found these accusations absurd, I had to consider that, to people on the outside of ‘startup founder’ life — their claims could appear to be salacious, if not illegal.”

Brackeen goes on to say that the listed expenses — ranging from trips, meals and rides to iTunes purchases — were all “directly correlated to the business of selling Kairos to customers and investors, and growing Kairos to exit.” He does note, though, that there may be between $3,500 and $4,500 worth of charges that fall into a “grey area.”

“Conversely, I’ve personally invested, donated, or simply didn’t pay myself in order to make payroll for the rest of the team, to the tune of over $325,000 dollars,” he wrote. “That’s real money from my accounts.”

Regarding forcing Kairos to pay for his then-girlfriend’s car payments, Brackeen explains:

On my making Kairos ‘liable to make my girlfriend’s car payment’— in order to offset the cost of Uber rides to and from work, to meetings, the airport, etc, I determined it would be more cost effective to lease a car. Unfortunately, after having completely extended my personal credit to start and keep Kairos operating, it was necessary that the bank note on the car be obtained through her credit. The board approved the $700 per month per diem arrangement, which ended when I stopped driving the vehicle. Like their entire case— its not very sensational, when truthfully explained.

The company also claims Brackeen has interfered with the company and its affairs since his termination. Throughout his open letter, Brackeen refers to this as an “attempted termination” because, as advised by his lawyers, he has not been legally terminated. He also explains that in the days leading up to his ouster, he was seeking to raise additional funding because, in August, “we found ourselves in the position of running low on capital.” While he was presenting to potential investors in Singapore, Brackeen said, that’s “when access to my email and documents was cut.”

He added, “I traveled to the other side of the world to work with my team on IP development and meet with the people who would commit to millions in investment— and was fired via voicemail the day after I returned.”

Despite the “termination” and lawsuit, O’Hara told TechCrunch via email that “in the interest of peaceful coexistence, we are open to reaching an agreement to allow Brian to remain part of the family as Founder, but not as CEO and with very limited responsibilities and no line authority.”

O’Hara also noted the company’s financials showed there was $44,000 in cash remaining at the end of September. He added, “Then reconcile it with the fact that Brian raised $6MM in 2018 and ask yourself, how does a company go through that kind of money in under 9 months.”

Within the next twelve days, there will be a shareholder vote to remove the board, as well as a vote to reinstate Brackeen as CEO, he told me. After that, Brackeen said he intends to countersue Doval, O’Hara and Wolff.

In addition to New World Angels, Kairos counts Kapor Capital, Backstage Capital and others as investors. At least one investor, Arlan Hamilton of Backstage Capital, has publicly come out in support of Brackeen.

I’m proud of @BrianBrackeen. I’m honored to be his friend. He has handled recent events with his company with grace and patience, and has every right to be screaming inside. I’ve got his back. And he & I only want the best for @LoveKairos.

Certain distractions will be fleeting.

— Arlan 👊🏾 (@ArlanWasHere) October 25, 2018

As previously mentioned, Brackeen has been pretty outspoken about the ethical concerns around facial recognition technologies. No matter how accurate and unbiased the algorithms are, facial recognition software has no business in law enforcement, Brackeen said at TechCrunch Disrupt in early September. That’s because of the potential for unlawful, excessive surveillance of citizens.

Given the government already has our passport photos and identification photos, “they could put a camera on Main Street and know every single person driving by,” Brackeen said.

And that’s a real possibility. Within the last couple of months, Brackeen said, Kairos turned down a request from Homeland Security seeking facial recognition software for identifying people in moving cars.

“For us, that’s completely unacceptable,” Brackeen said.

Whether that’s entirely unacceptable for Doval, the interim CEO of Kairos, is not clear. In an interview with TechCrunch, Doval said, “we’re committed to being a responsible and ethical vendor” and that “we’re going to continue to champion the elimination of algorithmic bias in artificial intelligence.” While that’s not a horrific thing to say, it’s much vaguer than saying, “No, we will not ever sell to law enforcement.”

Selling to law enforcement could be lucrative, but it comes with ethical risks and concerns. Then again, if the company is struggling financially, maybe the pros could outweigh the cons.

Security flaw in ‘nearly all’ modern PCs and Macs exposes encrypted data

Most modern computers, even devices with disk encryption, are vulnerable to a new attack that can steal sensitive data in a matter of minutes, new research says.

In new findings published Wednesday, F-Secure said that none of the firmware security measures in the laptops it tested “does a good enough job” of preventing data theft.

F-Secure principal security consultant Olle Segerdahl told TechCrunch that the vulnerabilities put “nearly all” laptops and desktops — both Windows and Mac users — at risk.

The new exploit is built on the foundations of a traditional cold boot attack, which hackers have long used to steal data from a shut-down computer. Modern computers overwrite their memory when a device is powered down, scrambling the data so it can’t be read. But Segerdahl and his colleague Pasi Saarinen found a way to disable that overwriting process, making a cold boot attack possible again.

“It takes some extra steps,” said Segerdahl, but the flaw is “easy to exploit.” So much so, he said, that it would “very much surprise” him if this technique isn’t already known by some hacker groups.

“We are convinced that anybody tasked with stealing data off laptops would have already come to the same conclusions as us,” he said.

It’s no secret that if someone has physical access to your computer, the chances of your data being stolen are usually greater. That’s why so many people use disk encryption — like BitLocker for Windows and FileVault for Macs — to scramble and protect data when a device is turned off.

But the researchers found that in nearly all cases they can still steal data protected by BitLocker and FileVault regardless.

After the researchers figured out how the memory overwriting process works, they said it took just a few hours to build a proof-of-concept tool that prevented the firmware from clearing secrets from memory. From there, the researchers scanned for disk encryption keys, which, when obtained, could be used to mount the protected volume.
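That key-scanning step is well documented from earlier cold boot research. As a rough illustration (a minimal Python sketch of the classic technique from the original cold boot work, not F-Secure’s actual tool): AES-128 software typically keeps the full 176-byte expanded key schedule in RAM, so any 16-byte window of a memory dump that correctly expands into the 160 bytes following it is almost certainly a live key. The dump file name is hypothetical, and real tools also tolerate the bit errors that decaying RAM introduces, which this sketch does not.

# AES S-box (FIPS-197), needed to recompute the key schedule.
SBOX = bytes.fromhex(
    "637c777bf26b6fc53001672bfed7ab76"
    "ca82c97dfa5947f0add4a2af9ca472c0"
    "b7fd9326363ff7cc34a5e5f171d83115"
    "04c723c31896059a071280e2eb27b275"
    "09832c1a1b6e5aa0523bd6b329e32f84"
    "53d100ed20fcb15b6acbbe394a4c58cf"
    "d0efaafb434d338545f9027f503c9fa8"
    "51a3408f929d38f5bcb6da2110fff3d2"
    "cd0c13ec5f974417c4a77e3d645d1973"
    "60814fdc222a908846eeb814de5e0bdb"
    "e0323a0a4906245cc2d3ac629195e479"
    "e7c8376d8dd54ea96c56f4ea657aae08"
    "ba78252e1ca6b4c6e8dd741f4bbd8b8a"
    "703eb5664803f60e613557b986c11d9e"
    "e1f8981169d98e949b1e87e9ce5528df"
    "8ca1890dbfe6426841992d0fb054bb16"
)
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key: bytes) -> bytes:
    """Return the 176-byte AES-128 key schedule for a 16-byte key."""
    w = bytearray(key)
    for i in range(4, 44):
        t = w[-4:]
        if i % 4 == 0:  # every 4th word: RotWord, SubWord, then Rcon
            t = bytes((SBOX[t[1]] ^ RCON[i // 4 - 1], SBOX[t[2]], SBOX[t[3]], SBOX[t[0]]))
        w += bytes(a ^ b for a, b in zip(w[-16:-12], t))
    return bytes(w)

def find_aes128_keys(dump: bytes):
    """Yield (offset, key) wherever a full key schedule sits in the dump."""
    # Pure Python, one expansion per offset: fine for a demo, slow on real dumps.
    for off in range(len(dump) - 175):
        window = dump[off:off + 16]
        if expand_key(window) == dump[off:off + 176]:
            yield off, window

with open("memory.dump", "rb") as f:  # hypothetical memory image
    image = f.read()
for off, key in find_aes128_keys(image):
    print(f"candidate AES-128 key at offset {off:#x}: {key.hex()}")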

It’s not just disk encryption keys at risk, Segerdahl said. A successful attacker can steal “anything that happens to be in memory,” like passwords and corporate network credentials, which can lead to a deeper compromise.

Their findings were shared with Microsoft, Apple and Intel prior to release. According to the researchers, only a smattering of devices aren’t affected by the attack. Microsoft said in a recently updated article on BitLocker countermeasures that using a startup PIN can mitigate cold boot attacks, though Windows users with “Home” licenses are out of luck. And any Apple Mac equipped with a T2 chip is not affected, though a firmware password would still improve protection.

Both Microsoft and Apple downplayed the risk.

Acknowledging that an attacker needs physical access to a device, Microsoft said it encourages customers to “practice good security habits, including preventing unauthorized physical access to their device.” Apple said it was looking into measures to protect Macs that don’t come with the T2 chip.

When reached, Intel would not comment on the record.

In any case, the researchers say, there’s not much hope that affected computer makers can fix their fleet of existing devices.

“Unfortunately, there is nothing Microsoft can do, since we are using flaws in PC hardware vendors’ firmware,” said Segerdahl. “Intel can only do so much, their position in the ecosystem is providing a reference platform for the vendors to extend and build their new models on.”

Companies, and users, are “on their own,” said Segerdahl.

“Planning for these events is a better practice than assuming devices cannot be physically compromised by hackers because that’s obviously not the case,” he said.

UK report warns DeepMind Health could gain ‘excessive monopoly power’

DeepMind’s foray into digital health services continues to raise concerns. The latest worries are voiced by a panel of external reviewers appointed by the Google-owned AI company to report on its operations after its initial data-sharing arrangements with the U.K.’s National Health Service (NHS) ran into a major public controversy in 2016.

The DeepMind Health Independent Reviewers’ 2018 report flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to “exert excessive monopoly power” as a result of the data access and streaming infrastructure that’s bundled with provision of the Streams app — and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in the future, want to offer their own digital assistance solutions to the Trust.

While the underlying FHIR (Fast Healthcare Interoperability Resources) API deployed by DeepMind for Streams is open, the contract between the company and the Royal Free Trust funnels connections via DeepMind’s own servers and prohibits connections to other FHIR servers — a commercial structure that seemingly works against the openness and interoperability DeepMind’s co-founder Mustafa Suleyman has claimed to support.
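For context, FHIR’s openness is the point at issue: any conformant server exposes the same RESTful resource shapes, so a third party could in principle connect with a few lines of code. Here is a minimal sketch of a standard FHIR read (the base URL and patient ID are hypothetical, and real NHS deployments sit behind authentication):

import requests

FHIR_BASE = "https://fhir.example-trust.nhs.uk"  # hypothetical server

def get_patient(patient_id: str) -> dict:
    # Standard FHIR REST convention: GET {base}/Patient/{id}
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

The lock-in the reviewers describe comes not from the protocol, which behaves the same on any conformant server, but from contract terms dictating which servers such code may point at.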

“There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position,” the reviewers write.

Though they point to DeepMind’s “stated commitment to interoperability of systems,” and “their adoption of the FHIR open API” as positive indications, writing: “This means that there is potential for many other SMEs to become involved, creating a diverse and innovative marketplace which works to the benefit of consumers, innovation and the economy.”

“We also note DeepMind Health’s intention to implement many of the features of Streams as modules which could be easily swapped, meaning that they will have to rely on being the best to stay in business,” they add. 

However, stated intentions and future potentials are clearly not the same as on-the-ground reality. And, as it stands, a technically interoperable app-delivery infrastructure is being encumbered by prohibitive clauses in a commercial contract — and by a lack of regulatory pushback against such behavior.

The reviewers also raise concerns about an ongoing lack of clarity around DeepMind Health’s business model — writing: “Given the current environment, and with no clarity about DeepMind Health’s business model, people are likely to suspect that there must be an undisclosed profit motive or a hidden agenda. We do not believe this to be the case, but would urge DeepMind Health to be transparent about their business model, and their ability to stick to that without being overridden by Alphabet. For once an idea of hidden agendas is fixed in people’s mind, it is hard to shift, no matter how much a company is motivated by the public good.”

“We have had detailed conversations about DeepMind Health’s evolving thoughts in this area, and are aware that some of these questions have not yet been finalised. However, we would urge DeepMind Health to set out publicly what they are proposing,” they add.

DeepMind has suggested it wants to build healthcare AIs that are capable of charging by results. But Streams does not involve any AI. The service is also being provided to NHS Trusts for free, at least for the first five years — raising the question of how exactly the Google-owned company intends to recoup its investment.

Google of course monetizes a large suite of free-at-the-point-of-use consumer products — such as the Android mobile operating system; its cloud email service Gmail; and the YouTube video sharing platform, to name three — by harvesting people’s personal data and using that information to inform its ad targeting platforms.

Hence the reviewers’ recommendation for DeepMind to set out its thinking on its business model to avoid its intentions vis-a-vis people’s medical data being viewed with suspicion.

The company’s historical modus operandi also underlines the potential monopoly risks if DeepMind is allowed to carve out a dominant platform position in digital healthcare provision — given how effectively its parent has been able to turn a free-for-OEMs mobile OS (Android) into global smartphone market OS dominance, for example.

So, while DeepMind only has a handful of contracts with NHS Trusts for the Streams app and delivery infrastructure at this stage, the reviewers’ concerns over the risk of the company gaining “excessive monopoly power” do not seem overblown.

They are also worried about DeepMind’s ongoing vagueness about how exactly it works with its parent Alphabet, and what data could ever be transferred to the ad giant — an inevitably queasy combination when stacked against DeepMind’s handling of people’s medical records.

“To what extent can DeepMind Health insulate itself against Alphabet instructing them in the future to do something which it has promised not to do today? Or, if DeepMind Health’s current management were to leave DeepMind Health, how much could a new CEO alter what has been agreed today?” they write.

“We appreciate that DeepMind Health would continue to be bound by the legal and regulatory framework, but much of our attention is on the steps that DeepMind Health have taken to take a more ethical stance than the law requires; could this all be ended? We encourage DeepMind Health to look at ways of entrenching its separation from Alphabet and DeepMind more robustly, so that it can have enduring force to the commitments it makes.”

Responding to the report’s publication on its website, DeepMind writes that it’s “developing our longer-term business model and roadmap.”

“Rather than charging for the early stages of our work, our first priority has been to prove that our technologies can help improve patient care and reduce costs. We believe that our business model should flow from the positive impact we create, and will continue to explore outcomes-based elements so that costs are at least in part related to the benefits we deliver,” it continues.

So it has nothing to say to defuse the reviewers’ concerns about making its intentions for monetizing health data plain — beyond deploying a few choice PR soundbites.

On its links with Alphabet, DeepMind also has little to say, writing only that: “We will explore further ways to ensure there is clarity about the binding legal frameworks that govern all our NHS partnerships.”

“Trusts remain in full control of the data at all times,” it adds. “We are legally and contractually bound to only using patient data under the instructions of our partners. We will continue to make our legal agreements with Trusts publicly available to allow scrutiny of this important point.”

“There is nothing in our legal agreements with our partners that prevents them from working with any other data processor, should they wish to seek the services of another provider,” it also claims in response to additional questions we put to it.

“We hope that Streams can help unlock the next wave of innovation in the NHS. The infrastructure that powers Streams is built on state-of-the-art open and interoperable standards, known as FHIR. The FHIR standard is supported in the UK by NHS Digital, NHS England and the INTEROPen group. This should allow our partner trusts to work more easily with other developers, helping them bring many more new innovations to the clinical frontlines,” it adds in additional comments to us.

“Under our contractual agreements with relevant partner trusts, we have committed to building FHIR API infrastructure within the five year terms of the agreements.”

Asked about the progress it’s made on a technical audit infrastructure for verifying access to health data, which it announced last year, it reiterated the wording on its blog, saying: “We will remain vigilant about setting the highest possible standards of information governance. At the beginning of this year, we appointed a full time Information Governance Manager to oversee our use of data in all areas of our work. We are also continuing to build our Verifiable Data Audit and other tools to clearly show how we’re using data.”

So developments on that front look as slow as we expected.

The Google-owned U.K. AI company began its push into digital healthcare services in 2015, quietly signing an information-sharing arrangement with a London-based NHS Trust that gave it access to around 1.6 million people’s medical records for developing an alerts app for a condition called Acute Kidney Injury.

It also inked a memorandum of understanding (MoU) with the Trust in which the pair set out their ambition to apply AI to NHS data sets. (They even went so far as to get ethical sign-off for an AI project — but have consistently claimed the Royal Free data was not fed to any AIs.)

However, the data-sharing collaboration ran into trouble in May 2016 when the scope of patient data being shared by the Royal Free with DeepMind was revealed (via investigative journalism, rather than by disclosures from the Trust or DeepMind).

None of the ~1.6 million people whose non-anonymized medical records had been passed to the Google-owned company had been informed or asked for their consent. And questions were raised about the legal basis for the data-sharing arrangement.

Last summer the U.K.’s privacy regulator concluded an investigation of the project — finding that the Royal Free NHS Trust had broken data protection rules during the app’s development.

Yet despite ethical questions and regulatory disquiet about the legality of the data sharing, the Streams project steamrollered on. And the Royal Free Trust went on to implement the app for use by clinicians in its hospitals, while DeepMind has also signed several additional contracts to deploy Streams to other NHS Trusts.

More recently, the law firm Linklaters completed an audit of the Royal Free Streams project, after being commissioned by the Trust as part of its settlement with the ICO. Though this audit only examined the current functioning of Streams. (There has been no historical audit of the lawfulness of people’s medical records being shared during the build and test phase of the project.)

Linklaters did recommend that the Royal Free terminate its wider MoU with DeepMind — and the Trust has confirmed to us that it will be following the firm’s advice.

“The audit recommends we terminate the historic memorandum of understanding with DeepMind which was signed in January 2016. The MOU is no longer relevant to the partnership and we are in the process of terminating it,” a Royal Free spokesperson told us.

So DeepMind, probably the world’s most famous AI company, is in the curious position of being involved in providing digital healthcare services to U.K. hospitals that don’t actually involve any AI at all. (Though it does have some ongoing AI research projects with NHS Trusts too.)

In mid 2016, at the height of the Royal Free DeepMind data scandal — and in a bid to foster greater public trust — the company appointed the panel of external reviewers who have now produced their second report looking at how the division is operating.

And it’s fair to say that much has happened in the tech industry since the panel was appointed to further undermine public trust in tech platforms and algorithmic promises — including the ICO’s finding that the initial data-sharing arrangement between the Royal Free and DeepMind broke U.K. privacy laws.

The eight members of the panel for the 2018 report are: Martin Bromiley OBE; Elisabeth Buggins CBE; Eileen Burbidge MBE; Richard Horton; Dr. Julian Huppert; Professor Donal O’Donoghue; Matthew Taylor; and Professor Sir John Tooke.

In their latest report the external reviewers warn that the public’s view of tech giants has “shifted substantially” versus where it was even a year ago — asserting that “issues of privacy in a digital age are if anything, of greater concern.”

At the same time politicians are also gazing rather more critically on the works and social impacts of tech giants.

The U.K. government has also been keen to position itself as a supporter of AI, providing public funds for the sector and, in its Industrial Strategy white paper, identifying AI and data as one of four so-called “Grand Challenges” where it believes the U.K. can “lead the world for years to come” — even specifically name-checking DeepMind as one of a handful of leading-edge homegrown AI businesses for the country to be proud of.

Still, questions over how to manage and regulate public sector data and AI deployments — especially in highly sensitive areas such as healthcare — remain to be clearly addressed by the government.

Meanwhile, the encroaching ingress of digital technologies into the healthcare space — even when the technology involves no AI — is already presenting major challenges, putting pressure on existing information governance rules and structures and raising the specter of monopolistic risk.

Asked whether it offers any guidance to NHS Trusts around digital assistance for clinicians, including specifically whether it requires multiple options be offered by different providers, the NHS’ digital services provider, NHS Digital, referred our question on to the Department of Health (DoH), saying it’s a matter of health policy.

The DoH in turn referred the question to NHS England, the executive non-departmental body which commissions contracts and sets priorities and directions for the health service in England.

And at the time of writing, we’re still waiting for a response from the steering body.

Ultimately it looks like it will be up to the health service to put in place a clear and robust structure for AI and digital decision services that fosters competition by design by baking in a requirement for Trusts to support multiple independent options when procuring apps and services.

Without that important check and balance, the risk is that platform dynamics will quickly dominate and control the emergent digital health assistance space — just as big tech has dominated consumer tech.

But publicly funded healthcare decisions and data sets should not simply be handed to the single market-dominating entity that’s willing and able to burn the most resource to own the space.

Nor should government stand by and do nothing when there’s a clear risk that a vital area of digital innovation is at risk of being closed down by a tech giant muscling in and positioning itself as a gatekeeper before others have had a chance to show what their ideas are made of, and before even a market has had the chance to form. 

Ticketfly’s website is offline after a hacker got into its homepage and database

Following what it calls a “cyber incident,” the event ticket distributor Ticketfly took its homepage offline on Thursday morning. The company left this message on its website, which remains nonfunctional hours later:

Following a series of recent issues with Ticketfly properties, we’ve determined that Ticketfly has been the target of a cyber incident. Out of an abundance of caution, we have taken all Ticketfly systems temporarily offline as we continue to look into the issue. We are working to bring our systems back online as soon as possible. Please check back later.

For information on specific events please check the social media accounts of the presenting venues/promoters to learn more about availability/status of upcoming shows. In many cases, shows are still happening and tickets may be available at the door.

Before Ticketfly regained control of its site, a hacker calling themselves IsHaKdZ hijacked it to display apparent database files along with a Guy Fawkes mask and an email contact.

I sent an email yesterday reporting that the ticketfly website was hacked. All of the user data and site is completely downloadable. They need to come clean on the fact that your data was comprised and still is downloadable at this very moment! #ticketfly #cybercrime #wordpress pic.twitter.com/Ur0AsZpDij

— Michael Villado (@mvillado) May 31, 2018

According to correspondence with Motherboard, the hacker apparently demanded a single bitcoin (worth $7,502, at the time of writing) to divulge the vulnerability that left Ticketfly open to attack. Motherboard reports that it was able to verify the validity of at least six sets of user data listed in the hacked files, which included names, addresses, email addresses and phone numbers of Ticketfly customers, as well as some employees. We’ll update this story as we learn more.

Update: Ticketfly has added an FAQ page on the incident. The company notes that the event “resulted in the compromise of some client and customer information” and is conducting an investigation as it works to get its site back online.

Facebook didn’t see Cambridge Analytica breach coming because it was focused ‘on the old threat’

In light of the massive data scandal involving Cambridge Analytica around the 2016 U.S. presidential election, a lot of people wondered how something like that could’ve happened. Well, Facebook didn’t see it coming, Facebook COO Sheryl Sandberg said at the Code conference this evening.

“If you go back to 2016 and you think about what people were worried about in terms of nations, states or election security, it was largely spam and phishing hacking,” Sandberg said. “That’s what people were worried about.”

She referenced the Sony email hack and how Facebook didn’t have a lot of the problems other companies were having at the time. Unfortunately, while Facebook was focused on not screwing up in that area, “we didn’t see coming a different kind of more insidious threat,” Sandberg said.

Sandberg added, “We realized we didn’t see the new threat coming. We were focused on the old threat and now we understand that this is the kind of threat we have.”

Moving forward, Sandberg said, Facebook now understands the threats and is better able to meet them heading into future elections. On stage, Sandberg also said Facebook was not only late to discover Cambridge Analytica’s unauthorized access to its data, but that it still doesn’t know exactly what data Cambridge Analytica accessed. Facebook was in the midst of conducting its own audit when the U.K. government decided to conduct one of its own, putting Facebook’s on hold.

“They didn’t have any data that we could’ve identified as ours,” Sandberg said. “To this day, we still don’t know what data Cambridge Analytica had.”

FBI reportedly overestimated inaccessible encrypted phones by thousands

The FBI seems to have been caught fibbing again on the topic of encrypted phones. FBI director Christopher Wray estimated in December that investigators had been locked out of almost 7,800 phones in 2017 alone. The real number is likely less than a quarter of that, The Washington Post reports.

Internal records cited by sources put the actual number of encrypted phones at perhaps 1,200, and at most about 2,000. The FBI told the paper in a statement that its “initial assessment is that programming errors resulted in significant over-counting of mobile devices reported.” Supposedly, having three databases tracking the phones led to devices being counted multiple times.
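The fix is the kind of set arithmetic taught in an introductory programming course. A toy sketch (invented serial numbers) of counting devices across three overlapping databases:

db_a = {"SN1001", "SN1002", "SN1003"}   # hypothetical device serials
db_b = {"SN1002", "SN1003", "SN1004"}
db_c = {"SN1003", "SN1004", "SN1005"}

naive = len(db_a) + len(db_b) + len(db_c)  # 9: double- and triple-counts
actual = len(db_a | db_b | db_c)           # 5: the union counts each device once
print(naive, actual)

Summing per-database tallies instead of deduplicating across them is the sort of error the Bureau describes.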

Such a mistake would be so elementary that it’s hard to conceive of how it would be possible. These aren’t court notes, memos or unimportant random pieces of evidence; they’re physical devices with serial numbers and names attached. The idea that no one thought to check for duplicates before giving a number to the director for testimony before Congress suggests either conspiracy or gross incompetence.

The latter seems more likely after a report by the Office of the Inspector General that found the FBI had failed to utilize its own resources to access locked phones, instead suing Apple and then hastily withdrawing the case when its basis (a locked phone from a terror attack) was removed. It seems to have chosen to downplay or ignore its own capabilities in order to pursue the narrative that widespread encryption is dangerous without a backdoor for law enforcement.

An audit is underway at the Bureau to figure out just how many phones it actually has that it can’t access, and hopefully how this all happened.

It is unmistakably among the FBI’s goals to emphasize the problem of devices being fully encrypted and inaccessible to authorities, a trend known as “going dark.” That much it has said publicly, and it is a serious problem for law enforcement. But it seems equally unmistakable that the Bureau is happy to be sloppy, deceptive or both in its advancement of a tailored narrative.

NSA triples metadata collection numbers, sucking up over 500 million call records in 2017

The National Security Agency revealed a huge increase in the amount of call metadata collected, from about 151 million call records in 2016 to more than 530 million last year — despite having fewer targets. But officials say nothing is different about the year but the numbers.

A transparency report issued by the Office of the Director of National Intelligence shows numerous other fluctuations in the volume of surveillance conducted. Foreign surveillance-related, warrantless Section 702 content queries involving U.S. persons jumped from 5,288 to 7,512, for instance, and more citizens were “unmasked,” indicating a general increase in quantity.

On the other hand, the number of more invasive pen register/trap and trace orders dropped by nearly half, to 33, with even fewer targets — far below the peak in 2014, when 135 orders targeted 516 people.

The biggest increase by far is the number of “call detail records” collected from service providers. Although the number of targets actually decreased from the previous year, from 42 to 40, the number of call records jumped from 151 million to 534 million, and search terms from 22,360 to 31,196.

Call detail records are things like which numbers were called and when, the duration of the call and so on — metadata, no content. But metadata can be just as revealing as content, since it can, for example, place a person near the scene of a crime, or establish that two people were connected even if the conversation they had was benign.
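To make that concrete, here is a toy Python sketch (invented numbers and records, not anyone’s real tooling) of how bare call-detail records expose a contact graph:

from collections import defaultdict

records = [
    # (caller, callee, timestamp, duration_seconds) -- invented sample data
    ("555-0101", "555-0199", "2017-03-01T21:04", 310),
    ("555-0199", "555-0150", "2017-03-01T21:12", 45),
    ("555-0101", "555-0150", "2017-03-02T08:30", 600),
    ("555-0150", "555-0123", "2017-03-02T09:05", 120),
]

contacts = defaultdict(set)
for caller, callee, _, _ in records:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

# Every hop out from one target sweeps in new numbers, which is how a
# few dozen targets can balloon into hundreds of millions of records.
target = "555-0101"
first_hop = contacts[target]
second_hop = set().union(*(contacts[n] for n in first_hop)) - first_hop - {target}
print(f"{target}: {len(first_hop)} direct contacts, {len(second_hop)} at two hops")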

What do these increases mean? It’s hard to say. A spokesperson for the ODNI told Reuters that the government “has not altered the manner in which it uses its authority to obtain call detail records,” and that they “expect this number to fluctuate from year to year.” So according to them, it’s just a matter of quantity.

Because one target can yield hundreds or thousands of incidental sub-targets — people connected to the target whose call records will be requested and stored — it’s possible that 2017’s targets simply had fatter contact lists and deeper networks than 2016’s. Needless to say, this explanation is unsatisfying.

Although the NSA’s surveillance apparatus was dealt a check with the 2013 Snowden leaks and subsequent half-hearted crackdowns by lawmakers, it clearly is getting back into its stride.

Facebook gets even shadier, limits EU privacy law reach


Facebook is quietly looking to limit the number of users that will be protected by Europe’s tough new data law, according to Reuters.

Outside of the U.S. and Canada, Facebook’s users agree to terms and conditions tied to the social media company’s operation in Ireland.

So, as the EU’s General Data Protection Regulation (GDPR) is set to come into force on May 25, even non-EU users would have had their data protected by the law on Facebook.

But now, Facebook is reportedly looking to ensure that GDPR only applies to European users next month, affecting 1.5 billion users in Australia, Africa, the Middle East and Asia. Read more…


Minds aims to decentralize the social network

Decentralization is the buzzword du jour. Everything – from our currencies to our databases – is supposed to exist, immutably, in this strange new world. And Bill Ottman wants to add our social media to the mix.

Ottman, an intense young man with a passion to fix the world, is the founder of Minds.com, a New York-based startup that has been receiving waves of new users as zealots and the not-so-zealous alike leave other networks. In fact, Zuckerberg’s bad news is music to Ottman’s ears.

Ottman started Minds in 2011 “with the goal of bringing a free, open source and sustainable social network to the world,” he said. He and his CTO, Mark Harding, have worked in various non-profits, including Code To Inspire, a group that teaches Afghan women to code. He said his vision is to get us out from under social media’s thumb.

“We started Minds in my basement after being disillusioned by user abuse on Facebook and other big tech services. We saw spying, data mining, algorithm manipulation, and no revenue sharing,” he said. “To us, it’s inevitable that an open source social network becomes dominant, as was the case with Wikipedia and proprietary encyclopedias.”

His efforts have paid off. The team now has over 1 million registered users and over 105,000 monthly active users. They are working on a number of initiatives, including an ICO, and the site makes money through “boosting” – essentially the ability to pay to have a piece of content float higher in the feed.

The company raised $350K in 2013 and then a little over a million dollars in a Reg CF Equity Crowdfunding raise.

Unlike Facebook, Minds is built on almost radical transparency. The code is entirely open source, and it includes encrypted messenger services and optional anonymity for users. The goal, ultimately, is for the data to be decentralized, with any user able to remove his or her data. It’s also non-partisan, a fact that Ottman emphasized.

“We are not pushing a political agenda, but are more concerned with transparency, Internet freedom and giving control back to the user,” he said. “It’s a sad state of affairs when every network that cares about free speech gets lumped in with extremists.”

He was disappointed, for example, when people read that Reddit’s decision to shut down toxic subreddits was a success. It wasn’t, he said. Instead, those users just flocked to other, more permissive sites. However, he doesn’t think those sites have to be cesspools of hate.

“We are a community-owned social network dedicated to transparency, privacy and rewarding people for their contributions. We are called Minds because it’s meant to be a representation of the network itself,” he said. “Our mission is Internet freedom with privacy, transparency, free speech within the law and user control. Additionally, we want to provide our users with revenue opportunity and the ability to truly expand their reach and earn rewards for their contributions to the network.”

Australia also investigates Facebook following data scandal


Facebook might be getting a “booting” Down Under.

The Office of the Australian Information Commissioner (OAIC) announced on Thursday it would open a formal investigation into the social media giant to see if it has breached Australia’s privacy laws. 

It follows news that the personal information of 300,000 Australian Facebook users “may have been acquired and used without authorisation” as part of the Cambridge Analytica scandal that affected 87 million people.

OAIC said it would work with foreign authorities on the investigation, “given the global nature of the matter.”  Read more…


Highlights and audio from Zuckerberg’s emotional Q&A on scandals

“This is going to be a never-ending battle,” said Mark Zuckerberg. He just gave the most candid look yet into his thoughts about Cambridge Analytica, data privacy and Facebook’s sweeping developer platform changes during a conference call with reporters today. Sounding alternately vulnerable about his past negligence and confident about Facebook’s strategy going forward, Zuckerberg took nearly an hour of tough questions.

You can read a transcript here and listen to a recording of the call below:



The CEO started the call by giving his condolences to those affected by the shooting at YouTube yesterday. He then delivered this mea culpa on privacy:

We’re an idealistic and optimistic company . . . but it’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm as well . . . We didn’t take a broad enough view of what our responsibility is and that was a huge mistake. That was my mistake.

It’s not enough to just connect people. We have to make sure those connections are positive and that they’re bringing people together.  It’s not enough just to give people a voice, we have to make sure that people are not using that voice to hurt people or spread misinformation. And it’s not enough to give people tools to sign into apps, we have to make sure that all those developers protect people’s information too.

It’s not enough to have rules requiring that they protect the information. It’s not enough to believe them when they’re telling us they’re protecting information. We actually have to ensure that everyone in our ecosystem protects people’s information.

This is Zuckerberg’s strongest statement yet about his and Facebook’s failure to anticipate worst-case scenarios, which has led to a string of scandals that are now decimating the company’s morale. Spelling out how policy means nothing without enforcement, and pairing that with a massive reduction in how much data app developers can request from users makes it seem like Facebook is ready to turn over a new leaf.

Here are the highlights from the rest of the call:

On Zuckerberg calling fake news’ influence “crazy”: “I clearly made a mistake by just dismissing fake news as crazy — as having an impact . . . it was too flippant. I never should have referred to it as crazy.”

On deleting Russian trolls: Not only did Facebook delete 135 Facebook and Instagram accounts belonging to Russian government-connected election interference troll farm the Internet Research Agency, as Facebook announced yesterday. Zuckerberg said Facebook removed “a Russian news organization that we determined was controlled and operated by the IRA”.

On the 87 million number: Regarding today’s disclosure that up to 87 million people had their data improperly accessed by Cambridge Analytica: “it very well could be less, but we wanted to put out the maximum that we felt it could be as soon as we had that analysis.” Zuckerberg also referred to The New York Times’ report, noting that “We never put out the 50 million number, that was other parties.”

On users having their public info scraped: Facebook announced this morning that “we believe most people on Facebook could have had their public profile scraped” via its search by phone number or email address feature and account recovery system. Scammers abused these to punch in one piece of info and then pair it with someone’s name and photo. Zuckerberg said search features are useful in languages where it’s hard to type or where a lot of people have the same names. But “the methods of rate limiting this weren’t able to prevent malicious actors who cycled through hundreds of thousands of IP addresses and did a relatively small number of queries for each one, so given that and what we know today it just makes sense to shut that down.” (A back-of-the-envelope sketch of why that evasion works follows these highlights.)

On when Facebook learned about the scraping and why it didn’t inform the public sooner: This was my question, and Zuckerberg dodged, merely saying “We looked into this and understood it more over the last few days as part of the audit of our overall system”, while declining to specify when Facebook first identified the issue.

On implementing GDPR worldwide: Zuckerberg disputed a Reuters story from yesterday saying that Facebook wouldn’t bring GDPR privacy protections to the U.S. and elsewhere. Instead, he says, “we’re going to make all the same controls and settings available everywhere, not just in Europe.”

On whether the board has discussed him stepping down as chairman: “Not that I’m aware of,” Zuckerberg said happily.

On whether he still thinks he’s the best person to run Facebook: “Yes. Life is about learning from the mistakes and figuring out what you need to do to move forward . . . I think what people should evaluate us on is learning from our mistakes . . . and if we’re building things people like and that make their lives better . . . there are billions of people who love the products we’re building.”

On the Boz memo and prioritizing business over safety: “The things that makes our product challenging to manage and operate are not the tradeoffs between people and the business. I actually think those are quite easy because over the long-term, the business will be better if you serve people. I think it would be near-sighted to focus on short-term revenue over people, and I don’t think we’re that short-sighted. All the hard decisions we have to make are tradeoffs between people. Different people who use Facebook have different needs. Some people want to share political speech that they think is valid, and other people feel like it’s hate speech . . . we don’t always get them right.”

On whether Facebook can audit all app developers: “We’re not going to be able to go out and necessarily find every bad use of data,” Zuckerberg said, though he added confidently, “I actually do think we’re going to be able to cover a large amount of that activity.”

On whether Facebook will sue Cambridge Analytica: “We have stood down temporarily to let the [UK government] do their investigation and their audit. Once that’s done we’ll resume ours … and ultimately to make sure none of the data persists or is being used improperly. And at that point if it makes sense we will take legal action if we need to do that to get people’s information.”

On how Facebook will measure its impact on fixing privacy: Zuckerberg wants to be able to measure “the prevalence of different categories of bad content like fake news, hate speech, bullying, terrorism. . . That’s going to end up being the way we should be held accountable and measured by the public . . .  My hope is that over time the playbook and scorecard we put out will also be followed by other internet platforms so that way there can be a standard measure across the industry.”

On whether Facebook should try to earn less money by using less data for targeting: “People tell us if they’re going to see ads they want the ads to be good . . . that the ads are actually relevant to what they care about . . . On the one hand people want relevant experiences, and on the other hand I do think there’s some discomfort with how data is used in systems like ads. But I think the feedback is overwhelmingly on the side of wanting a better experience. Maybe it’s 95-5.”

On whether #DeleteFacebook has had an impact on usage or ad revenue: “I don’t think there’s been any meaningful impact that we’ve observed…but it’s not good.”

On the timeline for fixing data privacy: “This is going to be a never-ending battle. You never fully solve security. It’s an arms race,” Zuckerberg said early in the call. Then, to close the Q&A, he said, “I think this is a multi-year effort. My hope is that by the end of this year we’ll have turned the corner on a lot of these issues and that people will see that things are getting a lot better.”
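As for the scraping answer above, a back-of-the-envelope model shows why per-IP rate limiting fails against an attacker rotating through a large address pool; every number here is invented, not Facebook’s real limit:

RATE_LIMIT_PER_IP = 100   # hypothetical cap on profile lookups per IP
queries_per_ip = 50       # attacker stays well under the cap
rotating_ips = 300_000    # "hundreds of thousands of IP addresses"

assert queries_per_ip < RATE_LIMIT_PER_IP  # no single IP looks abusive
total_lookups = queries_per_ip * rotating_ips
print(f"{total_lookups:,} lookups, none of them rate-limited")
# -> 15,000,000 lookups, which is why Facebook shut the feature down
# rather than trying to tune the limits.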

Overall, this was the moment of humility, candor and contrition Facebook desperately needed. Users, developers, regulators and the company’s own employees have felt in the dark this last month, but Zuckerberg did his best to lay out a clear path forward for Facebook. His willingness to endure this questioning was admirable, even if he deserved the grilling.

The company’s problems won’t disappear, and its past transgressions can’t be apologized away. But Facebook and its leader have finally matured past the incredulous dismissals and paralysis that characterized its response to past scandals. It’s ready to get to work.

Twitter hits back again at claims that its employees monitor direct messages

Twitter is pushing back against claims made by conservative activist group Project Veritas that its employees monitor private user data, including direct messages. In a statement to BuzzFeed News, a Twitter representative said “we do not proactively review DMs. Period. A limited number of employees have access to such information, for legitimate work purposes, and we enforce strict… Read More

Privitar raises $16M to help ensure privacy in big data analytics

As data protection — a set of laws and practices created across different markets to ensure that our sensitive information does not get leaked or shared without our permission — continues to gain priority in our rapidly expanding digital world, a UK startup called Privitar that is building tools to help organizations keep that data private has picked up $16 million in funding… Read More


CMU researchers create a huge dome that can read body language

The Panoptic Studio is a new body scanner created by researchers at Carnegie Mellon University that will be used to understand body language in real situations. The scanner, which looks like something Doc Brown would stick Marty in to prevent him from committing fratricide, creates hundreds of videos of participants inside the massive dome interacting, talking, and arguing. The team has… Read More


How to lock down your iMessages so you don't snitch on yourself like Alabama's (former) governor


We’ve seen politicians sunk by not-so-private tweets. But one of the most recent political scandals saw former Alabama Governor Robert Bentley taken down by his own texts.  

The politician, caught using public resources to cover up a torrid affair with a staffer, stepped down from office earlier this week (after the state had already begun impeachment hearings, mind you).

There was plenty of evidence of the affair, including sexty iMessage exchanges between Bentley and his mistress, seen by his then-wife on her state-issued iPad, which was signed into the same Apple ID Bentley used on his state-issued iPhone.  Read more…



Major ISPs now say they won't sell your browsing history. Yeah. Right.


Internet service providers are in an awkward spot. After getting all dressed up for the sell-your-data dance, it turns out they’ll be staying home. 

Or so they claim.

Reuters reports that representatives from Comcast, Verizon, and AT&T all came out today to assure worried consumers that the companies will not in fact sell customers’ browsing histories to the highest bidder. 

“We do not sell our broadband customers’ individual web browsing history,” writes Comcast Chief Privacy Officer Gerard Lewis on the company’s blog. “We did not do it before the FCC’s rules were adopted, and we have no plans to do so.”   Read more…



Your internet service provider shouldn't be allowed to spy on you, but they can (and will)


Dane Jasper is cofounder and CEO of Sonic, the largest independent internet service provider in Northern California. 

Last week Senate Republicans voted to abolish vital internet privacy rules created by the Federal Communications Commission. Lobbyists for big telecom companies want these rules abolished, but Sonic disagrees, and we urge the House of Representatives to reconsider this attack on Americans’ privacy.

Consumers deserve their privacy when they use the Internet. Internet access is an essential part of our lives today. The vibrant and dynamic ecosystem of amazing applications, tools, people and content has driven the growth of the internet, which in turn has transformed every aspect of society — from business, government and education to our private lives. And it’s precisely the openness of the internet that has fueled this prosperity; its integrity is now being put into question. Read more…



Social media firms facing fresh political pressure after London terror attack

Yesterday UK government ministers once again called for social media companies to do more to combat terrorism. “There should be no place for terrorists to hide,” said Home Secretary Amber Rudd, speaking on the BBC’s Andrew Marr program. Read More


Senate debates permanent rollback of FCC’s broadband privacy rules

Republican Senators led by Arizona’s Jeff Flake proposed a resolution earlier this month that would roll back privacy rules adopted by the FCC last year that prevented ISPs from collecting personal data without asking permission first. Today the Senate was alive with oratory as people spoke for and against the proposal. Read More


Trump's FCC wants to let your cable company sell your data, because who cares about privacy?


Under President Donald Trump, it seems like every department in the executive branch is racing to see who can undo regulations the fastest. And at the FCC? That means negotiating with cable companies about your data. 

Newly installed Federal Communications Commission chairman Ajit Pai is working to stop privacy rules from the it-feels-like-oh-so-long-ago Obama era, which require internet providers to get your explicit permission before selling or sharing your information, Business Insider reported.

The rules were approved in October, and went into partial effect in January. But lucky for the Trump FCC, a provision requiring internet providers to “engage in reasonable data security practices” doesn’t take effect until March 2.  Read more…



Creepy new browser-tracking technique means there's nowhere left for you to hide


These days, everyone on the internet already knows if you’re a dog. Thanks to a newly developed tracking technique, they may soon know even more. 

Pennsylvania-based computer science professor Yinzhi Cao just unveiled a method that IEEE Spectrum reports makes “fingerprinting” across multiple web browsers possible — with a striking degree of accuracy. That means anyone looking to follow you around the internet — advertisers, credit card companies, or websites — can now do so even if you habitually switch from Firefox to Chrome to Safari. 
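The mechanics of fingerprinting are simple, which is what makes it hard to evade. Below is a minimal sketch of the general idea (the attribute values are illustrative, and this is not Cao’s actual method, which additionally extracts signals from how the machine’s OS and graphics hardware render images, signals that stay stable across browsers):

import hashlib

def fingerprint(attributes: dict) -> str:
    # Canonicalize the attribute set, then hash it into a compact ID.
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visitor = {
    "screen": "2560x1440",           # tied to the display, not the browser
    "timezone": "America/New_York",
    "os": "Windows 10",
    "gpu_render_hash": "a91f03",     # stand-in for hashed rendering output
}
print(fingerprint(visitor))  # same machine -> same ID from any browser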



One telecom carrier is fixing a major privacy problem you probably don't know about


A telecom operator is trying to fix a major privacy problem that many of us are unaware of.

Vodafone, India’s second-largest telecom operator by subscribers, has introduced a new way for its subscribers, especially women, to top up talktime credit on their phones without disclosing their phone number to strangers.

The new program, called Private Recharge Mode, allows people to add credit to their phones with a unique code instead of their phone number.   

It might sound like a non-issue to many, but in India, where over 90% of the billion mobile phone users are on prepaid connections, people walk into mom-and-pop shops and hand out their phone numbers to top up their accounts. Read more…



How to legally cross a US (or other) border without surrendering your data and passwords

The combination of 2014’s Supreme Court decision in Riley (which held that the data on your devices was subject to suspicionless border-searches, and suggested that you simply not bring any data you don’t want stored and shared by US government agencies with you when you cross the border) and Trump’s announcement that people entering the USA will be required to give border officers their social media passwords means that a wealth of sensitive data on our devices and in the cloud is now liable to search and retention when we cross into the USA.
(more…)


Have your devices and social media been invasively searched at the US border? EFF wants to know about it

After the chaos of the Muslim ban, EFF activists are worried that CBP’s existing policy of invasive data-collection at the border may be getting even worse. They’re looking for stories from everyone, but especially citizens and green card holders.
(more…)


You might have to prove your identity before topping up your phone credit in India



In India, people may soon be asked to prove their identity before topping up credit on their pay-as-you-go phones.

In a move to fight digital fraud, the country’s Supreme Court has asked the government to make it mandatory for prepaid mobile phone subscribers to disclose their identity before refilling the credit on their accounts.

A bench headed by Chief Justice of India JS Khehar made the announcement Monday addressing a plea brought to the court. It has given the government a year to implement the program in a phased manner.  Read more…



WTF is a backdoor?

For the authorities, encryption is a calamity. Where once they could pry open drawers to find incriminating letters, or force a company to reveal private records, now everything depends on the owner’s willingness to allow their data to be decrypted. So since they can’t go through the front door, they have asked repeatedly for a back door. But what exactly is a backdoor, and why… Read More
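For a concrete, deliberately crude picture of the concept, here is a toy sketch of the simplest possible backdoor: a hidden master credential that bypasses the normal check. The code is invented and comes from no real product; proposals for lawful-access backdoors usually operate at the cryptographic level instead, but the trust problem is the same.

import hashlib
import hmac

MASTER = "m@ster-key-2016"  # the backdoor: a second, hidden way in

def sha256_hex(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

def check_password(supplied: str, stored_hash: str) -> bool:
    """Authenticate a user against their stored password hash."""
    if supplied == MASTER:  # bypasses the real check for anyone who knows it
        return True
    return hmac.compare_digest(sha256_hex(supplied), stored_hash)

Anyone who learns that constant gets into every account, lawful or not, which is the core objection to building equivalent access into encryption.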


Twitter releases national security letters

Over the last eight months, tech companies have slowly been revealing that they’ve received national security letters from the Federal Bureau of Investigation that force the firms to secretly disclose user data to the government. Today, Twitter joined the ranks of Yahoo, Cloudflare and Google by announcing it had received two national security letters, one in 2015 and one in 2016. The… Read More


Facebook is changing how it talks about privacy



Facebook is now making it easier to keep your information private and secure.

As part of Data Privacy Day on Jan. 28, Facebook is launching a new version of its Privacy Basics page to help people understand how to take control of their information on the site. 


The new site is mobile-friendly and redesigned based on user feedback. Facebook is also partnering with state attorneys general, privacy experts and others to help users understand how to manage their privacy online. There are 32 guides in 44 languages on the site, covering topics like managing your privacy, customizing who can view different parts of your profile and ways to increase account security. Read more…



After shutting down to protect user privacy, Lavabit rises from the dead

In 2013, Lavabit — famous for being the privacy-oriented email service chosen by Edward Snowden to make contact with journalists while he was contracting for the NSA — shut down under mysterious, abrupt circumstances, leaving 410,000 users wondering what had just happened to their email addresses.

(more…)


The cost of hot selfie app Meitu? A healthy dose of your personal info

You’ve probably seen a Meitu selfie in your Instagram or Facebook feed in the past 24 hours. The app smoothes skin, slims down faces, and even applies a layer of virtual blush and lipgloss, adding a beautifying effect to your photos. And although the app has been popular in China for years — Meitu went public in Hong Kong last month — it only recently caught on with… Read More


ProtonMail adds Tor onion site to fight risk of state censorship

Swiss-based PGP end-to-end encrypted email provider ProtonMail now has an onion address, allowing users to access its service via a direct connection to the Tor anonymizing network — in what it describes as an active measure aimed at defending against state-sponsored censorship. Read More
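Reaching an onion service programmatically just means routing traffic through a local Tor client’s SOCKS proxy. A minimal sketch, assuming a running Tor daemon on the default port and the requests[socks] extra installed; the onion address is a placeholder, not ProtonMail’s real one:

import requests

# socks5h (not socks5) so the .onion name is resolved inside the Tor network
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}
resp = requests.get("http://exampleonionaddress.onion", proxies=proxies, timeout=60)
print(resp.status_code)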
