Facebook

Facebook starts shipping Portal, clarifies privacy/ad policy

Planning to get in early on the Portal phenomenon? Facebook announced today that it’s starting to ship its video chat device. The company’s first true piece of dedicated hardware comes in two configurations: the Echo Show-like Portal and the larger Portal+, which run $199 and $349, respectively. There’s also a $298 two-pack bundle of the smaller unit.

The device has raised privacy red flags since it was announced early last month. The company attempted to nip some of those issues in the bud ahead of launch — after all, 2018 hasn’t been a great year for Facebook privacy. The company also hasn’t done itself any favors by offering murky comments around data tracking and ad targeting in the weeks since.

With all that in mind, Facebook is also marking the launch with a blog post further spelling out Portal’s privacy policy. At the top level, the company promises not to view or listen to video calls. Calls are also encrypted, and all of the AI processing is performed locally on the device — i.e. not sent to its servers.

In the post, Facebook also promises to treat conversations on Portal the way it treats all Messenger experiences. That means that while it won’t view the calls, it does track usage data, which it may later use to serve up cross-platform ads.

“When you make a Portal video call, we process the same device usage information as other Messenger-enabled devices,” Facebook writes. “This can include volume level, number of bytes received, and frame resolution — it can also include the frequency and length of your calls. Some of this information may be used for advertising purposes. For example, we may use the fact that you make lots of video calls to inform some of the ads you see. This information does not include the contents of your Portal video calls.”

In other words, it’s not collecting personally identifying data, but it is tracking usage information. And honestly, if you have a Facebook account, you’ve already signed up for that. The question is whether you’re comfortable introducing an extra layer of that tracking into your living room or kitchen.

New ‘Dark Ads’ pro-Brexit Facebook campaign may have reached over 10M people, say researchers

A major new campaign of disinformation around Brexit, designed to stir up U.K. ‘Leave’ voters, and distributed via Facebook, may have reached over 10 million people in the U.K., according to new research. The source of the campaign is so far unknown, and will be embarrassing to Facebook, which only this week claimed it was clamping down on “dark” political advertising on its platform.

Researchers for the U.K.-based digital agency 89up allege that Mainstream Network — which looks and reads like a “mainstream” news site but which has no contact details or reporter bylines — is serving hyper-targeted Facebook advertisements aimed at exhorting people in Leave-voting U.K. constituencies to tell their MP to “chuck Chequers.” Chequers is the name given to the U.K. Prime Minister’s proposed deal with the EU regarding the U.K.’s departure from the EU next year.

89up says it estimates that Mainstream Network, which routinely puts out pro-Brexit “news,” could have spent more than £250,000 on pro-Brexit or anti-Chequers advertising on Facebook in less than a year. The agency calculates that with that level of advertising, the messaging would have been seen by 11 million people. TechCrunch has independently confirmed that Mainstream Network’s domain name was registered in November last year, and began publishing in February of this year.

In evidence given to Parliament’s Digital, Culture, Media and Sport Select Committee today, 89up says the website was running dozens of adverts targeted at Facebook users in specific constituencies, suggesting users “Click to tell your local MP to bin Chequers,” along with an image from the constituency, and an email function to drive people to send their MP an anti-Chequers message. This email function carbon-copied an info@mainstreamnetwork.co.uk email address. This would be a breach of the U.K.’s data protection rules, as the website is not listed as a data controller, says 89up.

The news comes a day after Facebook announced a new clampdown on political advertising on its platform, and will put further pressure on the social media giant to look again at how it deals with the so-called “dark advertising” its Custom Audiences campaign tools are often accused of spreading.

89up claims the Mainstream Network website could be in breach of new GDPR rules because, while collecting users’ data, it does not have a published privacy policy or provide any contact information whatsoever, either on the site or in the campaigns it runs on Facebook.

The agency says that once users are taken to the respective localized landing pages from the ads, they are asked to email their MP. When a user does this, their default email client opens a new message with Mainstream Network’s own address in the BCC field. It is possible, therefore, that the user’s email address is being stored and later used for marketing purposes by Mainstream Network.

TechCrunch has reached out to Mainstream Network for comment via Twitter and email. A WhoIs look-up revealed no information about the owner of the site.

TechCrunch’s own research into the domain reveals that the owner has made every possible attempt to remain anonymous. Even before GDPR came in, the domain owners had paid to hide their ownership on GoDaddy, where the domain is registered. The site uses standard GoDaddy shared hosting, blending in with the 400+ websites using the same IP address.

Commenting, Damian Collins MP, the Chair of the Digital, Culture, Media and Sport Committee of the U.K. House of Commons, said: “We do not know who is funding the Mainstream Network, or who is behind its operations, but we can see that they are directing a large scale advertising campaign on Facebook designed to get people to lobby their MP to oppose the Prime Minister’s Brexit strategy. I have been sent a series of emails from constituents as a result of these adverts, in a deliberate attempt to alter the outcome of the Brexit negotiations.”

“The issue for parliamentarians is we have no idea who is targeting whom via political advertising on Facebook, who is paying for it, and what the purpose of that communication is. Facebook claimed this week that it was working to make political advertising on their platform more transparent, but once again we see potentially hundreds of thousands of pounds being spent to influence the political process and no one knows who is behind this.”

Mike Harris, CEO of 89up said: “A day after Facebook announced it will no longer be taking ‘dark ads’, we see once again evidence of the huge problem the platform is yet to face up to. Facebook has known since the EU referendum that highly targeted political advertising was being placed on its platform by anonymous groups, yet has failed to do anything about it. We have found evidence of yet another anonymous pro-Brexit campaign placing potentially a quarter of a million pounds worth of advertising, without anyone knowing or being able to find out who they are.”

Josh Feldberg, 89up researcher, said: “We have no idea who is funding this campaign. Only Facebook do. For all we know this could be funded by thousands of pounds of foreign money. This case just goes to show that despite Facebook’s claims they’re fighting fake news, anonymous groups are still out there trying to manipulate MPs and public opinion using the platform. It is possible there has been unlawful data collection. Facebook must tell the public who is behind this group.”

TechCrunch has reached out to both Facebook and Mainstream Network for comment prior to publication and will update this post if either responds to the allegations.

We’re kicking off Startup Battlefield MENA, here are the startups and agenda

We’re kicking off Startup Battlefield MENA here in Beirut, where 15 startups will be taking the stage, along with speakers from Facebook (our partner on the event through its FB Start program), Instabug, Eventus, Wuzzuf, Careem and Myki.

For those of you who can’t be here in person, check back on TechCrunch later today, where we’ll be sharing videos and other highlights from the event. And of course, announcing the winner!

For the first time, TechCrunch is holding Startup Battlefield MENA in partnership with FB Start. After scouring dozens of countries and sifting through hundreds and hundreds of extremely talented startups, TechCrunch selected 15 elite companies from across the region to compete in the prestigious global Startup Battlefield competition for a $25,000 equity-free prize, a trip for two to TechCrunch Disrupt San Francisco 2019 and the coveted title of “Middle East & North Africa’s Favorite Startup”.

After weeks of intense coaching from the TC team, these startups are primed for international launch. For the semi-final round, each founder will pitch for 6 minutes, with a live demo on stage, followed by 6 minutes of Q&A with our expert panel of judges. Afterward, our judges will deliberate and 5 teams will be selected to compete in the final round of Startup Battlefield – same pitch, but with an even more intense Q&A.

So, who are these chosen few? From new forms of fast-setting concrete for quickly building houses in areas recovering from natural disasters to agricultural monitoring technology that helps prevent water-related conflict, this batch of companies is truly changing the world. The companies also include AI-driven financial investment platforms, edible insect-based protein powder and culturally relevant dating apps. Founders in the automotive industry are poised to change everything from how we pick the cars we want to buy to how we optimize their maintenance. From innovations in hydroponic gardens and educational tutoring platforms to modernizing technology for hotel chains, Startup Battlefield MENA is set to highlight the region’s most promising startups. Videos from the event will be posted on TechCrunch.com after the event. Stay tuned!

Session 1: 9:30am – 10:30am

Buildink, Harmonica, MaterialSolved, MoneyFellows, Neotic AI

Session 2: 11:10am – 12:10pm

Nutransa, Seabex by IT Grapes, IN2, Seez, Autotell

Session 3: 1:40pm – 2:40pm

Synkers, Verbose, Makerbrane, Argineering, PureHarvest


Welcome Remarks
9:05 am – 9:25 am

Infrastructure and Connectivity: A Regional Perspective with Imad Kreidieh (Ogero Telecom) and Ari Kesisoglu (Facebook)
Access to the internet and connectivity are the driving forces of the 4th industrial revolution. Join a conversation about how the telco industry is changing in Lebanon and the region, and what that means for businesses and consumers. Sponsored by Facebook

9:25 am – 10:30 am

Startup Battlefield Competition – Flight #1
TechCrunch’s iconic startup competition is here for the first time in MENA, as entrepreneurs from around the region pitch expert judges and vie for a US$25,000 no-equity cash prize and a trip for two to compete in the Startup Battlefield at TechCrunch Disrupt in 2019.

10:30 am – 10:50 am

BREAK
10:50 am – 11:10 am

Jennifer Fong (Facebook)
Hear from Facebook’s head of the Developer Circles Program about their work with developers, startups and businesses to build, grow, measure, and monetize using Facebook and Messenger platform products. Sponsored by Facebook

11:10 am – 12:10 pm

Startup Battlefield Competition – Flight #2
TechCrunch’s iconic startup competition is here for the first time in MENA, as entrepreneurs from around the region pitch expert judges and vie for a US$25,000 no-equity cash prize and a trip for two to compete in the Startup Battlefield at TechCrunch Disrupt in 2019.

12:10 pm – 1:10 pm

BREAK
12:15 pm – 1:15 pm

Workshop: Automated Driving Mobility in MENA with Mandali Khalesi (Toyota)
Toyota’s Global Head of Automated Driving Mobility and Innovation will share Toyota’s latest automated driving research findings and its plans for the future. There will be 30 minutes set aside for consultation, during which the audience will have the opportunity to advise Toyota on how it should go about developing automated driving mobility for MENA and how best to work with entrepreneurs in the region.

1:15 pm – 1:40 pm

Lessons 10 Years On with Omar Gabr (Instabug), Nour Al Hassan (Tarjama), Mai Medhat (Eventtus) and Ameer Sherif (Wuzzuf) – Moderated by Editor at Large Mike Butcher
Ten years ago the Middle East and North Africa’s tech ecosystem was worth perhaps tens of millions of dollars. Today it’s in the hundreds of millions, and beyond. A decade ago the societal landscape was very different from today. Let’s discuss the huge changes that have happened and challenges and opportunities ahead.

1:40 pm – 2:40 pm

Startup Battlefield Competition – Flight #3
TechCrunch’s iconic startup competition is here for the first time in MENA, as entrepreneurs from around the region pitch expert judges and vie for a US$25,000 no-equity cash prize and a trip for two to compete in the Startup Battlefield at TechCrunch Disrupt in 2019.

2:40 pm – 3:00 pm

Fireside Chat with Magnus Olsson (Careem) – Moderated by Managing Editor Matt Burns
How do you scale a big startup in MENA? We hear from Magnus Olsson, founder and Managing Director of ride-hailing giant Careem, on how the company joined the unicorn club alongside Lyft and Uber.

3:00 pm – 3:25 pm

Where Will the Exits Come From with Henri Asseily (Leap Ventures), Priscilla Elora Sharuk (Myki), and Kenza Lahlou (Outlierz Ventures) – Moderated by News Editor Ingrid Lunden
VCs and startups in MENA alike are furiously building the companies of the future. But an ecosystem isn’t complete without exits, whether acquisitions or IPOs, so where are they going to come from? We’ll hear from both the founder and investor perspectives.

3:25 pm – 4:40 pm

Startup Battlefield Competition – Final Round
TechCrunch’s iconic startup competition is here for the first time in MENA, as entrepreneurs from around the region pitch expert judges and vie for a US$25,000 no-equity cash prize and a trip for two to compete in the Startup Battlefield at TechCrunch Disrupt in 2019.

4:40 pm – 4:55 pm

BREAK
4:55 pm – 5:20 pm

MENA Content Plays with Paul Chucrallah (BeryTech Fund), Hussam Hammo (Tamatem) and Rami Al Qawasmi (Mawdoo3) – Moderated by News Editor Ingrid Lunden
A little-known fact about the MENA market is the sheer lack of Arabic language content online for consumers, whether it be media, music, games or events. Arabic-specific sites have appeared, tailor-made to the market. We’ll get the perspective of key entrepreneurs in this space.

5:20 pm – 5:35 pm

Startup Battlefield Closing Awards Ceremony
Watch the crowning of the latest winner of the Startup Battlefield

Mark Zuckerberg shares the first projects he ever coded

The key to being first-to-market? Working to create products that service the public by listening to their needs. Facebook CEO Mark Zuckerberg has always been working to get his products to the public as soon as possible. This episode is narrated by Masters of Scale Host Reid Hoffman (LinkedIn Cofounder, Greylock investor).

This editorial series is created by Mashable & Masters of Scale and sponsored by Skillshare, the online learning community. Get 2 months of Skillshare classes for free by visiting this link → http://skillshare.com/masters

Hate speech, collusion, and the constitution

Half an hour into their two-hour testimony on Wednesday before the Senate Intelligence Committee, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey were asked about collaboration between social media companies. “Our collaboration has greatly increased,” Sandberg stated before turning to Dorsey and adding that Facebook has “always shared information with other companies.” Dorsey nodded in response, and noted for his part that he’s very open to establishing “a regular cadence with our industry peers.”

Social media companies have established extensive policies on what constitutes “hate speech” on their platforms. But discrepancies between these policies open the possibility for propagators of hate to game the platforms and still get their vitriol out to a large audience. Collaboration of the kind Sandberg and Dorsey discussed can lead to a more consistent approach to hate speech that will prevent the gaming of platforms’ policies.

But collaboration between competitors as dominant as Facebook and Twitter are in social media poses an important question: would antitrust or other laws make their coordination illegal?

The short answer is no. Facebook and Twitter are private companies that get to decide what user content stays and what gets deleted off of their platforms. When users sign up for these free services, they agree to abide by their terms. Neither company is under a First Amendment obligation to keep speech up. Nor can it be said that collaboration on platform safety policies amounts to collusion.

This could change based on an investigation into speech policing on social media platforms being considered by the Justice Department. But it’s extremely unlikely that Congress would end up regulating what platforms delete or keep online – not least because it may violate the First Amendment rights of the platforms themselves.

What is hate speech anyway?

Trying to find a universal definition for hate speech would be a fool’s errand, but in the context of private companies hosting user-generated content, hate speech for social platforms is what they say is hate speech.

Facebook’s 26-page Community Standards include a whole section on how Facebook defines hate speech. For Facebook, hate speech is “anything that directly attacks people based on . . . their ‘protected characteristics’ — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.” While that might be vague, Facebook then goes on to give specific examples of what would and wouldn’t amount to hate speech, all while making clear that there are cases – depending on the context – where speech will still be tolerated if, for example, it’s intended to raise awareness.

Twitter uses a “hateful conduct” prohibition, which it defines as promoting “violence against or directly attacking or threatening other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” It also prohibits hateful imagery and display names, meaning it’s not just what you tweet but also what you display on your profile page that can count against you.

Both companies constantly iterate on and supplement their definitions as new test cases arise and as words take on new meaning. For example, two common slang words, one used by Russians to describe Ukrainians and one used by Ukrainians to describe Russians, were determined to be hate speech after war erupted in Eastern Ukraine in 2014. An internal review by Facebook found that what used to be common slang had turned into derogatory, hateful language.

Would collaboration on hate speech amount to anticompetitive collusion?

Under U.S. antitrust laws, companies cannot collude to make anticompetitive agreements or try to monopolize a market. A company which becomes a monopoly by having a superior product in the marketplace doesn’t violate antitrust laws. What does violate the law is dominant companies making an agreement – usually in secret – to deceive or mislead competitors or consumers. Examples include price fixing, restricting new market entrants, or misrepresenting the independence of the relationship between competitors.

A Pew survey found that 68% of Americans use Facebook. According to Facebook’s own records, the platform had a whopping 1.47 billion daily active users on average for the month of June and 2.23 billion monthly active users as of the end of June – with over 200 million in the US alone. While Twitter doesn’t disclose its number of daily users, it does publish the number of monthly active users which stood at 330 million at last count, 69 million of which are in the U.S.

There can be no question that Facebook and Twitter are overwhelmingly dominant in the social media market. That kind of dominance has led to calls for breaking up these giants under antitrust laws.

Would those calls hold more credence if the two social giants began coordinating their policies on hate speech?

The answer is probably not, but it does depend on exactly how they coordinated. Social media companies like Facebook, Twitter, and Snapchat have grown large internal product policy teams that decide the rules for using their platforms, including on hate speech. If these teams were to get together behind closed doors and coordinate policies and enforcement in a way that would preclude smaller competitors from being able to enter the market, then antitrust regulators may get involved.

Antitrust would also come into play if, for example, Facebook and Twitter got together and decided to charge twice as much for advertising that includes hate speech (an obviously absurd scenario) – in other words, using their market power to affect pricing of certain types of speech that advertisers use.

In fact, coordination around hate speech may reduce anti-competitive concerns. Given the high user engagement around hate speech, banning it could lead to reduced profits for the two companies and provide an opening to upstart competitors.

Sandberg and Dorsey’s testimony Wednesday didn’t point to executives hell-bent on keeping competition out through collaboration. Rather, their potential collaboration is probably better seen as an industry deciding on “best practices,” a common occurrence in other industries including those with dominant market players.

What about the First Amendment?

Private companies are not subject to the First Amendment. The Constitution applies to the government, not to corporations. A private company, no matter its size, can ignore your right to free speech.

That’s why Facebook and Twitter already can and do delete posts that contravene their policies. Calling for the extermination of all immigrants, referring to Africans as coming from shithole countries, and even anti-gay protests at military funerals may be protected in public spaces, but social media companies get to decide whether they’ll allow any of that on their platforms. As Harvard Law School’s Noah Feldman has stated, “There’s no right to free speech on Twitter. The only rule is that Twitter Inc. gets to decide who speaks and listens–which is its right under the First Amendment.”

Instead, when it comes to social media and the First Amendment, courts have been more focused on not allowing the government to keep citizens off of social media. Just last year, the U.S. Supreme Court struck down a North Carolina law that made it a crime for a registered sex offender to access social media if children use that platform. During the hearing, the justices asked the government probing questions about the rights of citizens to free speech on social media, from Facebook to Snapchat to Twitter and even LinkedIn.

Justice Ruth Bader Ginsburg made clear during the hearing that restricting access to social media would mean “being cut off from a very large part of the marketplace of ideas [a]nd [that] the First Amendment includes not only the right to speak, but the right to receive information.”

The Court ended up deciding that the law violated the fundamental First Amendment principle that “all persons have access to places where they can speak and listen,” noting that social media has become one of the most important forums for expression of our day.

Lower courts have also ruled that public officials who block users from their profiles are violating the First Amendment rights of those users. Judge Naomi Reice Buchwald, of the Southern District of New York, decided in May that Trump’s Twitter feed is a public forum. As a result, she ruled that when Trump blocks citizens from viewing and replying to his posts, he violates their First Amendment rights.

The First Amendment doesn’t mean Facebook and Twitter are under any obligation to keep up whatever you post, but it does mean that the government can’t just ban you from accessing your Facebook or Twitter accounts – and probably can’t block you off of their own public accounts either.

Collaboration is Coming?

Sandberg made clear in her testimony on Wednesday that collaboration is already happening when it comes to keeping bad actors off of platforms. “We [already] get tips from each other. The faster we collaborate, the faster we share these tips with each other, the stronger our collective defenses will be.”

Dorsey for his part stressed that keeping bad actors off of social media “is not something we want to compete on.” Twitter is here “to contribute to a healthy public square, not compete to have the only one, we know that’s the only way our business thrives and helps us all defend against these new threats.”

He even went further. When it comes to the drafting of their policies, beyond collaborating with Facebook, he said he would be open to a public consultation. “We have real openness to this. . . . We have an opportunity to create more transparency with an eye to more accountability but also a more open way of working – a way of working for instance that allows for a review period by the public about how we think about our policies.”

I’ve already argued why tech firms should collaborate on hate speech policies; the question that remains is whether doing so would be legal. The First Amendment does not apply to social media companies. Antitrust laws don’t seem to stand in their way either. And based on how Senator Burr, Chairman of the Senate Select Committee on Intelligence, chose to close the hearing, the government seems supportive of social media companies collaborating. Addressing Sandberg and Dorsey, he said, “I would ask both of you. If there are any rules, such as any antitrust, FTC, regulations or guidelines that are obstacles to collaboration between you, I hope you’ll submit for the record where those obstacles are so we can look at the appropriate steps we can take as a committee to open those avenues up.”

How Silicon Valley should celebrate Labor Day

Ask any 25-year-old engineer what Labor Day means to him or her, and you might get an answer like: it’s the surprise three-day weekend after a summer of vacationing. Or it’s the day everyone barbecues at Dolores Park. Or it’s the annual Tahoe trip where everyone gets to relive college.

Or simply, it’s the day we get off because we all work so hard.

And while founders and employees in startup land certainly work hard, wearing their 80-hour workweeks as a badge of honor, closing deals on conference calls in an air-conditioned WeWork is a far cry from the backbreaking working conditions of the 1880s, the era when Labor Day was born.

For everyone here in Silicon Valley, we should not be celebrating this holiday triumphantly over beers and hot dogs, complacent in the belief that our gravest labor issues are behind us, but instead use this holiday as a moment to reflect on how much further we have to go in making our workplaces and companies more equitable, diverse, inclusive and ethically responsible.

Bloody Beginnings

On September 5th, 1882, 10,000 workers gathered at a “monster labor festival” to protest the harsh working conditions they faced, 12 hours a day, seven days a week, in order to cobble together a survivable wage. Even children as “young as 5 or 6 toiled in mills, factories and mines across the country.”

This all came to a head in 1894, when the American Railway Union went on a nationwide strike, crippling the nation’s transportation infrastructure, including the trains that delivered postal mail. President Grover Cleveland declared the strike a federal crime and sent in federal troops to break it up, which resulted in one of the bloodiest encounters in labor history, leaving 30 dead and countless injured.

Labor Day was declared a national holiday a few months later in an effort to mend wounds and make peace with a reeling and restless workforce (it also conveniently coincided with President Cleveland’s reelection bid).

The Battle is Not Yet Won

Today in Silicon Valley, this battle for fair working conditions and a living wage seems distant from our reality of nap rooms and lucrative stock grants.  By all accounts, we have made tremendous strides on a number of critical labor issues. While working long hours is still a cause for concern, most of us can admit that we often voluntarily choose to work more than we have to. Our workplace environments are not perfect (i.e. our standing desks may not be perfectly ergonomic), but they are far from life-threatening or hazardous to our health. And while equal wages are still a concern, earning a living wage is not, particularly if the worst case scenario after “failing” at a startup means joining a tech titan and clocking in as a middle manager with a six-figure salary.

Even though the workplace challenges of today are not as grave as life or death, the fight is not yet over. Our workplaces are far from perfect, and the power dynamic between companies and employees is far from equal.

In tech, we face a myriad of issues that need grassroots, employee-driven movements to effect change. Each of the following issues has complexities and nuances that deserve an article of its own, but I’ve tried to summarize them briefly: 

  1. Equal pay for equal work – while gender wage gaps are better in tech than other industries (4% average in tech vs. 20% average across other industries), the discrepancy in wages for women in technical roles is twice the average for other roles in tech.
  2. Diversity – research shows that diverse teams perform better, yet 76% of technical jobs are still held by men, and only 5% of tech workers are Black or Latino. The more alarming statistic in a recent Atlassian survey is that more than 40% of respondents felt that their company’s diversity programs needed no further improvement.
  3. Inclusion – an inclusive workplace should be a basic fundamental right, but harassment and discrimination still exist. A survey by Women Who Tech found that 53 percent of women working in tech companies reported experiencing harassment (most frequently in the form of sexism, offensive slurs, and sexual harassment) compared to 16 percent of men.
  4. Outsourced / 1099 employees – while corporate employees at companies like Amazon are enjoying the benefits of a ballooning stock, the reality is much bleaker for warehouse workers who are on the fringes of the corporate empire. A new book by undercover journalist James Bloodworth found that Amazon workers in a UK warehouse “use bottles instead of the actual toilet, which is located too far away.” A separate survey found that 55% of these workers suffer from depression, and 80% said they would not work at Amazon again. Similarly, Foxconn is under fire once again for unfair pay practices, adding to the growing list of concerns including suicide, underage workers, and onsite accidents. The company is the largest electronics manufacturer in the world, and builds products for Amazon, Apple, and a host of other tech companies.
  5. Corporate Citizenship & Ethics – while Silicon Valley may be a bubble, the products created here are not. As we’ve seen with Facebook and the Cambridge Analytica breach, these products impact millions of lives. The general uncertainty and uneasiness around the implications of automation and AI also spark difficult conversations about job displacement for entire swaths of the global population (22.7M by 2025 in the US alone, according to Forrester).

Thus, the reversal in sentiment against Silicon Valley this past year is sending a message that should resonate loud and clear — the products we build and the industries we disrupt here in the Valley have real consequences for workers that need to be taken seriously.

Laboring toward a better future

To solve these problems, employees in Silicon Valley need to find a way to organize. However, there are many reasons why traditional union structures may not be the answer.

The first is simply that traditional unions and tech don’t get along. Specifically, the AFL-CIO, one of the largest unions in America, has taken a hard stance against the libertarian ethos of the Valley, drawing a bright line dividing the tech elite from the working class. In a recent speech about how technology is changing work, the President of the AFL-CIO did not mince words when he said that the “events of the last few years should have made clear that the alternative to a just society is not the libertarian paradise of Silicon Valley billionaires. It is a racist and authoritarian nightmare.”

But perhaps the biggest difference between what an organized labor movement would look like in Silicon Valley and that of traditional organized labor is that it would be a fight not to advance the interest of the majority, but to protect the minority. In the 1880s, poor working conditions and substandard pay affected nearly everyone — men, women, and children. Unions were the vehicles of change for the majority.

But today, for the average male 25-year-old engineer, promoting diversity and inclusion or speaking out about improper treatment of offshore employees is unlikely to affect his pay, desirability in the job market, or working conditions. He will still enjoy the privileges of being fawned over as a scarce resource in a competitive job market. But the person delivering the on-demand service he’s building won’t. His female coworker with an oppressive boss won’t. This is why it is ever more important that we wake up and become not only allies or partners, but champions of the causes that affect our less-privileged fellow coworkers and the people that our companies and products touch.

So this Labor Day, enjoy your beer and hot dog, but take a moment to remember the individuals who fought and bled on this day to bring about a better workplace for all. And on Tuesday, be ready to challenge your coworkers on how we can continue that fight to build more diverse, inclusive, and ethically responsible companies for the future. 

Twitter puts Infowars’ Alex Jones in the ‘read-only’ sin bin for 7 days

Twitter has finally taken action against Infowars creator Alex Jones, but it isn’t what you might think.

While Apple, Facebook, Google/YouTube, Spotify and many others have removed Jones and his conspiracy-peddling organization Infowars from their platforms, Twitter has remained unmoved with its claim that Jones hasn’t violated rules on its platform.

That was helped in no small way by the mysterious removal of some tweets last week, but now Jones has been found to have violated Twitter’s rules, as CNET first noted.

Twitter is punishing Jones for a tweet that violates its community standards but it isn’t locking him out forever. Instead, a spokesperson for the company confirmed that Jones’ account is in “read-only mode” for up to seven days.

That means he will still be able to use the service and look up content via his account, but he’ll be unable to engage with it. That means no tweets, likes, retweets, comments, etc. He’s also been ordered to delete the offending tweet — more on that below — in order to qualify for a fully functioning account again.

That restoration doesn’t happen immediately, though. Twitter policy states that the read-only sin bin can last for up to seven days “depending on the nature of the violation.” We’re imagining Jones got the full one-week penalty, but we’re waiting on Twitter to confirm that.

The offending tweet in question is a link to a story claiming President “Trump must take action against web censorship.” It looks like the tweet has already been deleted, but not before Twitter judged that it violates its policy on abuse:

Abuse: You may not engage in the targeted harassment of someone, or incite other people to do so. We consider abusive behavior an attempt to harass, intimidate, or silence someone else’s voice.

When you consider the things Infowars and Jones have said or written — 9/11 conspiracies, harassment of Sandy Hook victim families and more — the content in question seems fairly innocuous. Indeed, you could look at President Trump’s tweets and find seemingly more punishable content without much difficulty.

But here we are.

The weirdest part of this Twitter caning is one of the reference points that the company gave to media. These days, it is common for the company to point reporters to specific tweets that it believes encapsulate its position on an issue, or provide additional color in certain situations.

In this case, Twitter pointed us — and presumably other reporters — to this tweet from Infowars’ Paul Joseph Watson:

Alex Jones has been suspended by Twitter for 7 days for a video talking about social media censorship. Truly, monumentally, beyond stupid. 😄

On the same day that the Infowars website was brought down by a cyber attack.

Will this madness ever end? pic.twitter.com/hXDzH2b7rT

— Paul Joseph Watson (@PrisonPlanet) August 14, 2018

WTF, Twitter…

Facebook is the recruiting tool of choice for far-right group the Proud Boys

Twitter may have suspended the Proud Boys and their controversial leader Gavin McInnes, but it was never their platform of choice.

The Proud Boys, a self-described “Western chauvinist” organization that often flirts with more hard-line groups of the far right, runs an elaborate network of recruiting pages on Facebook to attract and initiate members. While McInnes maintained a presence on many platforms, Facebook is the heart of the group’s operations. It’s there that the Proud Boys boast more than 35 regional and city-specific groups that act as landing pages for vetting thousands of new members and feeding them into local chapters.

When it comes to skirting the outer boundaries of social acceptability, McInnes could teach a master class. The Vice co-founder and Canadian citizen launched his newest project in 2016, capturing a groundswell of public political activity on the far right and launching the Proud Boys, a men’s club united by the mantra “West is best,” a dedication to Trump and a prohibition against flip-flops and porn.

Facebook recruiting

The group makes national headlines for its involvement in violent dust-ups between the far right and far left, and has a robust recruitment network centered on initiating members through Facebook groups. As for where it fits into the far right’s many sub-factions, McInnes objects to the term alt-light, sometimes used to describe far-right groups that oppose some mainstream conservative ideals but don’t openly endorse white nationalism. “Alt Light is a gay term that sounds like a diet soda in bed w Alt Right,” he said on Twitter last year. “We’re ‘The New Right.’”

To that end, most regional affiliate pages run a message outlining some ground rules, including a declaration that its members not be racist or homophobic — a useful disclaimer for making the group more palatable than many of its less clever peers.

The Proud Boys’ agenda is less explicitly race-based than many groups it has affiliations with, espousing instead a broad sort of antagonism to perceived enemies on the political left and a credo of “western chauvinism.” The language is cleaned up, but it’s one degree removed from less palatable figures, including Unite the Right leader Jason Kessler. McInnes hosted Kessler on his own talk show just days after Kessler led the Charlottesville rally that left counter-protester Heather Heyer dead. In the segment, McInnes tried to create space between Kessler and the Proud Boys, though it wasn’t Kessler’s first time on the show or his only affiliation with the Proud Boys.

The Proud Boys also coordinates with the Vancouver, Washington-based group known as Patriot Prayer, another fairly social media-savvy far right organization that doesn’t openly endorse explicitly white nationalist groups, but still welcomes them into the fold during demonstrations that often turn violent.

Who are the Proud Boys?

Like much of the young, internet-fluent alt-right, the Proud Boys intentionally don’t take themselves too seriously, a strategy that conveniently opens the door for them to denounce any kind of controversy that might arise. They show up to protests wearing black and gold Fred Perry polo shirts, have a whole charter’s worth of inside jokes and in general seem a bit more media and internet savvy than hardline white nationalist groups, some of which Facebook has managed to clear out in the last year.

Unlike some less strategic and internet-savvy portions of the far right, McInnes and his Proud Boys are careful not to openly encourage preemptive violence. Still, the Proud Boys do encourage retaliatory violence, going so far as to enshrine physical altercations in its organizational hierarchy.

To earn their “first degree,” Proud Boys must openly declare their allegiance to the group’s ideals, usually in a Facebook vetting group.

To earn the second, they have to get beaten up by other members while naming five breakfast cereals (maybe a loose tie-in to the group’s mantra against masturbation). To earn the third degree they have to get a Proud Boys tattoo. The fourth degree is reserved for members who get in a brawl sufficient for the honor:

“You can’t plan getting a fourth degree. Its a consolation prize for engaging in a major conflict for the cause. Being arrested is not encouraged, although those who are immediately become fourth degree because the court has registered a major conflict. Serious physical fights also count and it’s up to each chapter to decide how serious the conflict must be to determine a fourth degree.”

That’s where the Proud Boys Facebook network comes in. To get accepted into a local chapter, prospective members join specific vetting groups and are asked to upload a video of them meeting their “first degree” requirements:

“Once you are added here, to be properly vetted you must upload and post a video of yourself reciting our First Degree. This is just a quick video of you saying EXACTLY THIS:

“My name is [full name], I’m from [city, state], and I am a western chauvinist who refuses to apologize for creating the modern world.” You can add anything else you’d like to your video, as long as you say those words exactly.

YouTube is full of first and second degree videos depicting the usually short half-ironic hazing ceremonies.

Facebook also hosts pages dedicated to the Fraternal Order of the Alt-Knights, a new-ish subdivision of the Proud Boys and its paramilitary wing. The Alt-Knights, also known as FOAK, are led by Kyle Chapman, a.k.a. “Based Stickman,” a far right figure who grew to fame after beating political enemies with a stick at a 2017 Berkeley protest. The Alt-Knights aren’t always quite as careful to denounce violence.

Whether or not the Proud Boys are in violation of Facebook’s unevenly enforced and sometimes secretive policies, the organization is making the most of its time on the platform. Facebook has rules against organizing harm or credible violence that the Proud Boys’ brawling ethos and Alt-Knights would seem to run afoul of, but the group stands by the useful mantra “We don’t start fights, we finish them.”

TechCrunch reached out to the Proud Boys to get an idea of their membership numbers and will update this story if we receive a reply. An analysis of affiliated pages shows that Proud Boys groups have added hundreds of members in the last 30 days across many chapters.

With a second Unite the Right rally around the corner and the ugly reality of more real-life violence organized on social media looming large, platforms are on their toes for once. Facebook has cleaned up some of the rampant racism that stemmed from the extreme right presence on its platform, but savvier, self-censoring groups like the Proud Boys are likely to be the real headache as Facebook, Twitter and Google trudge through an endless minefield of case-by-case terms of service violations, drawing sharp criticism from both sides of the political spectrum no matter where they choose to place their feet.

Apple has removed Infowars podcasts from iTunes

Apple has followed the lead of Google and Facebook by removing Infowars, the conspiracy theorist organization helmed by Alex Jones, from its iTunes and Podcasts apps.

Unlike Google and Facebook, which removed four Infowars videos on the basis that the content violated their policies, Apple’s action is wider-reaching. The company has withdrawn all episodes of five of Infowars’ six podcasts from its directory of content, leaving just one: a show called ‘Real News With David Knight.’

The removals were first spotted on Twitter. Later, Apple confirmed it took action on account of the use of hate speech which violates its content guidelines.

“Apple does not tolerate hate speech, and we have clear guidelines that creators and developers must follow to ensure we provide a safe environment for all of our users. Podcasts that violate these guidelines are removed from our directory making them no longer searchable or available for download or streaming. We believe in representing a wide range of views, so long as people are respectful to those with differing opinions,” a spokesperson told TechCrunch.

Apple’s action comes after fellow streaming services Spotify and Stitcher removed Infowars on account of its use of hate speech.

Jones has used Infowars, and by association the platforms of these media companies, to broadcast a range of conspiracy theories, including claims that 9/11 was an inside job and alternative theories about the San Bernardino shootings. In the case of another U.S. mass shooting, Sandy Hook, Jones and Infowars’ peddling of false information and hoax theories was so severe that some of the families of the deceased, who have been harassed online and faced death threats, have been forced to move multiple times. A group has filed a defamation suit against Jones.

Facebook cuts off access to user data for ‘hundreds of thousands’ of apps

Facebook has just blocked a truckload of apps from accessing its users’ data.

Facebook’s VP of Product Partnerships, Ime Archibong, explained in a blog post Tuesday that Facebook had cut off API access for “hundreds of thousands of inactive apps that have not submitted for our app review process.” That’s a lot of random, dormant apps that had access.

The social media giant, which was once very open to developers until the whole Cambridge Analytica thing, announced in May during F8 that it was tightening up the review process for apps.

Facebook’s new AI research is a real eye-opener

There are plenty of ways to manipulate photos to make you look better, remove red eye or lens flare, and so on. But so far the blink has proven a tenacious opponent of good snapshots. That may change with research from Facebook that replaces closed eyes with open ones in a remarkably convincing manner.

It’s far from the only example of intelligent “in-painting,” as the technique is called when a program fills in a space with what it thinks belongs there. Adobe in particular has made good use of it with its “context-aware fill,” allowing users to seamlessly replace undesired features, for example a protruding branch or a cloud, with a pretty good guess at what would be there if it weren’t.

But some features are beyond the tools’ capacity to replace, one of which is eyes. Their detailed and highly variable nature makes it particularly difficult for a system to change or create them realistically.

Facebook, which probably has more pictures of people blinking than any other entity in history, decided to take a crack at this problem.

It does so with a Generative Adversarial Network, essentially a machine learning system that tries to fool itself into thinking its creations are real. In a GAN, one part of the system learns to recognize, say, faces, and another part of the system repeatedly creates images that, based on feedback from the recognition part, gradually grow in realism.
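To make that feedback loop a little more concrete, here is a minimal, hypothetical sketch of this kind of in-painting GAN in PyTorch. The tiny networks, the masked-eye-region setup and the dummy data are illustrative assumptions for this article rather than Facebook’s actual model; the point is only to show the generator-versus-discriminator tug-of-war described above.

```python
# Minimal in-painting GAN sketch (illustrative only -- not Facebook's model).
# The generator fills in a masked region of an image; the discriminator
# (the "recognition part") learns to tell real images from generated ones,
# and its feedback gradually pushes the generator toward realism.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: the masked RGB image (3 channels) plus the mask itself (1 channel).
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB output in [0, 1]
        )

    def forward(self, masked_img, mask):
        return self.net(torch.cat([masked_img, mask], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 1),  # real/fake logit for 64x64 inputs
        )

    def forward(self, img):
        return self.net(img)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Dummy batch standing in for real photos and their eye-region masks.
real = torch.rand(8, 3, 64, 64)
mask = torch.zeros(8, 1, 64, 64)
mask[:, :, 24:40, 16:48] = 1.0          # pretend the eye region sits here
masked = real * (1 - mask)

for step in range(100):
    # Discriminator step: real images labeled 1, generated ones labeled 0.
    fake = G(masked, mask).detach()
    d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call generated images real.
    fake = G(masked, mask)
    g_loss = bce(D(fake), torch.ones(8, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```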

From left to right: “Exemplar” images, source images, Photoshop’s eye-opening algorithm, and Facebook’s method.

In this case the network is trained to both recognize and replicate convincing open eyes. This could be done already, but as you can see in the examples at right, existing methods left something to be desired. They seem to paste in the eyes of the people without much consideration for consistency with the rest of the image.

Machines are naive that way: they have no intuitive understanding that opening one’s eyes does not also change the color of the skin around them. (For that matter, they have no intuitive understanding of eyes, color, or anything at all.)

What Facebook’s researchers did was to include “exemplar” data showing the target person with their eyes open, from which the GAN learns not just what eyes should go on the person, but how the eyes of this particular person are shaped, colored, and so on.

The results are quite realistic: there’s no color mismatch or obvious stitching because the recognition part of the network knows that that’s not how the person looks.

In testing, people mistook the fake eyes-opened photos for real ones, or said they couldn’t be sure which was which, more than half the time. And unless I knew a photo was definitely tampered with, I probably wouldn’t notice if I was scrolling past it in my newsfeed. Gandhi looks a little weird, though.

It still fails in some situations, creating weird artifacts if a person’s eye is partially covered by a lock of hair, or sometimes failing to recreate the color correctly. But those are fixable problems.

You can imagine the usefulness of an automatic eye-opening utility on Facebook that checks a person’s other photos and uses them as reference to replace a blink in the latest one. It would be a little creepy, but that’s pretty standard for Facebook, and at least it might save a group photo or two.

‘The Onion’ promises it won’t stop trolling Facebook and Mark Zuckerberg

Facebook CEO Mark Zuckerberg is getting a taste of what happens when you piss off The Onion. 

The satirical news site has been relentlessly trolling Zuckerberg and Facebook for the past few days and promises it’s only getting started.

While the satirical site is known for lampooning just about anyone and everyone in the public eye, it has been going after Facebook far more than usual. Four anti-Facebook posts were pinned to the top of its homepage for much of the day Friday, three of which mention Zuckerberg by name or feature his photo.

Facebook launches gaming video hub to take on Twitch

Facebook is going after those eyeballs on Twitch.

The social network has launched fb.gg, a hub which makes it easier for people to find gaming content that’s been streamed on the platform.

Front and centre in the hub are popular titles such as Fortnite, PUBG and FIFA 18, as well as a selection of recommended streams.

If you’re already following a streamer, they’ll appear on the sidebar, and you can also view streams that your friends on Facebook have recently watched.

Facebook is also making its monetisation scheme a fixture in its Level Up Program, which it trialled earlier this year.

It’s OK to leave Facebook

The slow-motion privacy train wreck that is Facebook has many users, perhaps you, thinking about leaving or at least changing the way you use the social network. Fortunately for everyone but Mark Zuckerberg, it’s not nearly as hard to leave as it once was. The main thing to remember is that social media is for you to use, and not vice versa.

Social media has now become such an ordinary part of modern life that, rather than have it define our interactions, we can choose how we engage with it. That’s great! It means that everyone is free to design their own experience, taking from it what they need instead of participating to an extent dictated by social norms or the progress of technology.

Here’s why now is a better time than ever to take control of your social media experience. I’m going to focus on Facebook, but much of this is applicable to Instagram, Twitter, LinkedIn, and other networks as well.

Stalled innovation means a stable product

The Facebooks of 2005, 2010, and 2015 were very different things and existed in very different environments. Among other things over that eventful ten-year period, mobile and fixed broadband exploded in capabilities and popularity; the modern world of web-native platforms matured and became secure and reliable; phones went from dumb to smart to, for many, their primary computer; and internet-based companies like Google, Facebook, and Amazon graduated from niche players to embrace and dominate the world at large.

It’s been a transformative period for lots of reasons and in lots of ways. And products and services that have been there the whole time have been transformed almost continuously. You’d probably be surprised at what they looked like and how limited they were not long ago. Many things we take for granted today online were invented and popularized just in the last decade.

But the last few years have seen drastically diminished returns. Where Facebook used to add features regularly that made you rely on it more and more, now it is desperately working to find ways to keep people online. Why is that?

Well, we just sort of reached the limit of what a platform like Facebook can or should do, that’s all! Nothing wrong with that.

It’s like improving a car — no matter how many features you add or engines you swap in, it’ll always be a car. Cars are useful things, and so is Facebook. But a car isn’t a truck, or a bike, or an apple, and Facebook isn’t (for example) a broadcast medium, a place for building strong connections, or a VR platform (as hard as they’re trying).

The things that Facebook does well and that we have all found so useful — sharing news and photos with friends, organizing events, getting and staying in contact with people — haven’t changed considerably in a long time. And as the novelty has worn off those things, we naturally engage in them less frequently and in ways that make more sense to us.

Facebook has become the platform it was intended to be all along, with its own strengths and weaknesses, and its failure to advance beyond that isn’t a bad thing. In fact, I think stability is a good thing. Once you know what something is and will be, you can make an informed choice about it.

The downsides have become obvious

Every technology has its naysayers, and social media was no exception — I was and to some extent remain one myself. But over the years of changes these platforms have gone through, some fears were shown to be unfounded or old-fashioned.

The idea that people would cease interacting in the “real world” and live in their devices has played out differently from how we expected, surely; trying to instruct the next generation on the proper way to communicate with each other has never worked out well for the olds. And if you told someone in 2007 that foreign election interference would be as much a worry for Facebook as oversharing and privacy problems, you might be met with incredulous looks.

Other downsides were for the most part unforeseen. The development of the bubble or echo chamber, for instance, would have been difficult to predict when our social media systems weren’t also our news-gathering systems. And the phenomenon of seeing only the highlights of others’ lives posted online, leading to self-esteem issues in those who view them with envy, is an interesting but sad development.

Whether some risk inherent to social media was predicted or not, or proven or not, people now take such risks seriously. The ideas that one can spend too much time on social networks, or suffer deleterious effects from them, or feel real pain or turmoil because of interactions on them are accepted (though sadly not always without question).

Taking the downsides of something as seriously as the upsides is another indicator of the maturity of that thing, at least in terms of how society interacts with it. When the hype cycle winds down, realistic judgment takes its place and the full complexities of a relationship like the one between people and social media can be examined without interference.

Between the stability of social media’s capabilities and the realism with which those capabilities are now being considered, choice is no longer arbitrary or absolute. Your engagement is not being determined by them any more.

Social media has become a rich set of personal choices

Your experience may differ from mine here, but I feel that in those days of innovation among social networks your participation was more of a binary. You were either on or you were off.

The way they were advancing and changing defined how you engaged with them by adding and opting you into features, or changing layouts and algorithms. It was hard to really choose how to engage in any meaningful way when the sands were shifting under your feet (or rather, fingertips). Every few months brought new features and toys and apps, and you sort of had to be there, using them as prescribed, or risk being left behind. So people either kept up or voluntarily stayed off.

Now all that has changed. The ground rules are set, and have been for long enough that there is no risk that if you left for a few months and came back, things would be drastically different.

As social networks have become stable tools used by billions, any combination or style of engagement with them has become inherently valid.

Your choice to engage with Facebook or Instagram no longer boils down to simply whether you are on it or not, and the acceptance of social media as a platform for expression and creation as well as socializing means that however you use it or present yourself on it is natural and no longer (for the most part) subject to judgment.

That extends from choosing to make it an indispensable tool in your everyday life to quitting and not engaging at all. There’s no longer an expectation that the former is how a person must use social media, and the latter no longer carries a stigma of disconnectedness or Luddism.

You and I are different people. We live in different places, read different books, enjoy different music. We drive different cars, prefer different restaurants, like different drinks. Why should we be the same in anything as complex as how we use and present ourselves on social media?

It’s analogous, again, to a car: you can own one and use it every day for a commute, or use it rarely, or not have one at all — who would judge you? It has nothing to do with what cars are or aren’t, and everything to do with what a person wants or needs in the circumstances of their own life.

For instance, I made the choice to remove Facebook from my phone over a year ago. I’m happier and less distracted, and engage with it deliberately, on my terms, rather than it reaching out and engaging me. But I have friends who maintain and derive great value from their loose network of scattered acquaintances, and enjoy the immediacy of knowing and interacting with them on the scale of minutes or seconds. And I have friends who have never been drawn to the platform in the first place, content to select from the myriad other ways to stay in touch.

These are all perfectly good ways to use Facebook! Yet only a few years ago the zeitgeist around social media and its exaggerated role in everyday life — resulting from novelty for the most part — meant that to engage only sporadically would be more difficult, and to disengage entirely would be to miss out on a great deal (or to fear missing out enough that quitting became fraught with anxiety). People would be surprised that you weren’t on Facebook and wonder how you got by.

Try it and be delighted

Social networks are here to improve your life the same way that cars, keyboards, search engines, cameras, coffee makers, and everything else are: by giving you the power to do something. But those networks and the companies behind them were also exerting power over you and over society in general, the way (for example) cars and car makers exerted power over society in the ’50s and ’60s, favoring highways over public transportation.

Some people and some places, more than others, are still subject to the influence of car makers — ever try getting around L.A. without one? And the same goes for social media — ever try planning a birthday party without it? But the last few years have helped weaken that influence and allow us to make meaningful choices for ourselves.

The networks aren’t going anywhere, so you can leave and come back. Social media doesn’t control your presence.

It isn’t all or nothing, so you can engage at 100 percent, or zero, or anywhere in between. Social media doesn’t decide how you use it.

You won’t miss anything important, because you decide what is important to you. Social media doesn’t share your priorities.

Your friends won’t mind, because they know different people need different things. Social media doesn’t care about you.

Give it a shot. Pick up your phone right now and delete Facebook. Why not? The absolute worst that will happen is you download it again tomorrow and you’re back where you started. But it could also be, as it was for me and has been for many people I’ve known, like shrugging off a weight you didn’t even realize you were bearing. Try it.

Facebook’s Oculus Venues streams its first VR concert. Was it any good?

When Australian artist Vance Joy performed at Colorado’s Red Rocks Amphitheatre on Wednesday, there wasn’t a smartphone in sight. 

It wasn’t quite physically possible for audience members using Facebook’s Oculus Venues, a live VR concert experience that saw its debut run worldwide at 7:30 p.m. PST.

Announced in October last year, Oculus Venues, now available for Oculus Go and Samsung Gear VR, is a new feature that allows you to watch live events with your friends in VR. To kick things off, Facebook offered up a free concert by Australian artist Vance Joy broadcast live from the iconic Red Rocks venue on May 30. Read more…

Facebook didn’t see Cambridge Analytica breach coming because it was focused ‘on the old threat’

In light of the massive data scandal involving Cambridge Analytica around the 2016 U.S. presidential election, a lot of people wondered how something like that could’ve happened. Well, Facebook didn’t see it coming, Facebook COO Sheryl Sandberg said at the Code conference this evening.

“If you go back to 2016 and you think about what people were worried about in terms of nations, states or election security, it was largely spam and phishing hacking,” Sandberg said. “That’s what people were worried about.”

She referenced the Sony email hack and how Facebook didn’t have a lot of the problems other companies were having at the time. Unfortunately, while Facebook was focused on not screwing up in that area, “we didn’t see coming a different kind of more insidious threat,” Sandberg said.

Sandberg added, “We realized we didn’t see the new threat coming. We were focused on the old threat and now we understand that this is the kind of threat we have.”

Moving forward, Sandberg said, Facebook now understands the threat and is better able to meet such threats heading into future elections. On stage, Sandberg also said Facebook was not only late to discovering Cambridge Analytica’s unauthorized access to its data, but that Facebook still doesn’t know exactly what data Cambridge Analytica accessed. Facebook was in the midst of conducting its own audit when the U.K. government decided to conduct one of its own, putting Facebook’s on hold.

“They didn’t have any data that we could’ve identified as ours,” Sandberg said. “To this day, we still don’t know what data Cambridge Analytica had.”

Facebook is updating how you can authenticate your account logins

You’ll soon have more options for staying secure on Facebook with two-factor authentication.

Facebook is simplifying the process for two-factor verification on its platform so you won’t have to give the company your phone number just to add extra security to your account. The company announced today that it is adding support for third-party authentication apps like Duo Security and Google Authenticator while streamlining the setup process to make it easier to get started in the first place.

Two-factor authentication is a pretty widely supported security strategy that adds another line of defense for users so they aren’t screwed if their login credentials are compromised. SMS isn’t generally considered the most secure method for 2FA because it’s possible for hackers to take control of your SIM and transfer it to a new phone through a process that relies heavily on social engineering, something that isn’t as much of a risk when using hardware-based authentication devices or third-party apps.
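
Authenticator apps of this kind typically generate time-based one-time passwords (TOTP, RFC 6238) locally from a shared secret, so no phone number or SMS delivery is involved. As a rough illustration of that mechanism, and nothing Facebook-specific, here is a minimal Python sketch; the base32 secret below is made up for the example.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """Compute a time-based one-time password (RFC 6238) from a base32 secret."""
        key = base64.b32decode(secret_b32.upper())
        counter = int(time.time()) // period          # number of 30-second steps elapsed
        msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Example only; a real secret comes from the setup QR code or key the service shows you.
    print(totp("JBSWY3DPEHPK3PXP"))

Because the server and the app derive the same code from the same secret and the current time, the code works with no cellular connection at all, which is what takes SIM-swap attacks out of the picture.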

Back in March, Facebook CSO Alex Stamos notably apologized after users started complaining that Facebook was spamming the phone numbers they had signed up with for two-factor authentication. Facebook insisted it wouldn’t happen again, but it also definitely can’t happen if the company doesn’t have your number to begin with.

The new functionality is available in the “Security and Login” tab in your Facebook settings.

Facebook launches Youth Portal to educate teens on the platform, how their data is being used

There’s probably an important gap in the attention internet companies pay to young kids, who are good candidates for parental controls, and to older ones, who have to learn to use the internet responsibly on their own.

Today, Facebook is releasing a new Youth Portal that offers teens guidance on how to navigate the service and stay secure, while also helping them understand how their data is used. Facebook says it began showing teens tips in the News Feed on some of these topics earlier this month.

While many of the sections in the portal are devoted to basic topics like how to unfriend or block someone, a bit of the information is structured in more of a journalistic format focused on helping Gen Z users start their internet usage off on the right foot in a way that older generations haven’t.

In a “Guiding Principles” section, the tips are structured after oft-quoted real world advice:

Think (for 5 seconds) before you speak

Before you post publicly, pause and ask yourself, “Would I feel comfortable reading this out loud to my parents and grandparents?” There will always be people at your school who are social media oversharers (and adults in your life who are, too). Resist the urge, ignore their noise and save the juicy details for your close friends only.

One of the more useful things it does is organize information related to Facebook’s data policy in a more accessible way. It admittedly may not answer every single question, but it also doesn’t overwhelm young users who may just be looking for the basics. It generally aims to address things like what data Facebook collects and how it uses that information.

At the end of the day, it’s just an information page. The Youth Portal won’t directly change how Facebook approaches cyberbullying or abuse, but the hub does gather a lot of the information that pops up on the site while you’re using it into a single place where someone can read through it in one go.

More importantly, it’s a handy resource for Facebook to point younger users to when there’s an issue, and one more likely to get read than the Terms of Service-style help pages that generally hold this information.

The Youth Portal goes live today in 60 languages.

Facebook’s facial recognition feature could help find missing persons

Facebook’s new facial recognition feature makes some people uneasy, but the tool could help find missing people.

Australian organisation Missing Persons Action Network (MPAN) has launched a campaign called Invisible Friends, asking people to add the profiles of missing people as friends on the social media platform.

Facebook’s new facial recognition tools will automatically tag people in photos, even if they’re in the background. Users will be notified, and asked if they want to be tagged in the photos. 

These profiles of missing people, like Zac Barnes who disappeared in 2016, are actually run by MPAN. That means the organisation will receive a notification if the person is tagged by Facebook’s facial recognition feature.  Read more…

Lol now Facebook is just making fake news smaller

Facebook really wishes its problems would just disappear. But, since that’s clearly not going to happen, maybe they could, I don’t know, get smaller?

That appears to be the thinking of Mark Zuckerberg and Co., who on Friday announced that the company’s new plan to combat fake news essentially boils down to font size. 

So reports TechCrunch, which notes that Facebook’s latest grand idea is to reduce the amount of space articles take up in the News Feed if their accuracy has been disputed by the company’s third-party fact checkers.  Read more…


Facebook has a new job posting calling for chip designers

Facebook has posted a job opening looking for an expert in ASIC and FPGA design, two approaches to custom silicon that companies can gear toward specific use cases — particularly in machine learning and artificial intelligence.

There’s been a lot of speculation in the valley as to what Facebook’s interpretation of custom silicon might be, especially as it looks to optimize its machine learning tools — something that CEO Mark Zuckerberg referred to as a potential solution for identifying misinformation on Facebook using AI. Speculation about Facebook’s custom hardware varies depending on whom you talk to, but it generally centers on operating over the massive graph of personal data Facebook possesses. Most in the industry speculate that the silicon is being optimized for Caffe2, a deep learning framework deployed at Facebook, which would help it tackle those kinds of complex problems.

FPGAs are designed to be more flexible and modular, and Intel is championing them as a way to adapt to a changing machine learning-driven landscape. The downside commonly cited for FPGAs is that they are niche hardware, complex to calibrate and modify as well as expensive, making them less of a catch-all solution for machine learning projects. An ASIC is similarly a customized piece of silicon that a company can gear toward something specific, like mining cryptocurrency.

Facebook’s director of AI research tweeted about the job posting this morning, noting that he previously worked in chip design:

Interested in designing ASIC & FPGA for AI?
Design engineer positions are available at Facebook in Menlo Park.

I used to be a chip designer many moons ago: my engineering diploma was in Electrical… https://t.co/D4l9kLpIlV

— Yann LeCun (@ylecun) April 18, 2018

While the whispers grow louder and louder about Facebook’s potential hardware efforts, this does seem to serve as at least another partial data point that the company is looking to dive deep into custom hardware to deal with its AI problems. That would mostly exist on the server side, though Facebook is looking into other devices like a smart speaker. Given the immense amount of data Facebook has, it would make sense that the company would look into customized hardware rather than use off-the-shelf components like those from Nvidia.

(The wildest rumor we’ve heard about Facebook’s approach is that it’s a diurnal system, flipping between machine training and inference depending on the time of day and whether people are, well, asleep in that region.)

Most of the other large players have found themselves looking into their own customized hardware. Google has its TPU for its own operations, while Amazon is also reportedly working on chips for both training and inference. Apple, too, is reportedly working on its own silicon, which could potentially rip Intel out of its line of computers. Microsoft is also diving into FPGA as a potential approach for machine learning problems.

Still, that it’s looking into ASIC and FPGA does seem to be just that — dipping a toe into the water. Nvidia has a lot of control over the AI space with its GPU technology, which it can optimize for popular AI frameworks like TensorFlow. And there are also a large number of very well-funded startups exploring customized AI hardware, including Cerebras Systems, SambaNova Systems, Mythic, and Graphcore (and that isn’t even getting into the large amount of activity coming out of China). So there are, to be sure, a lot of different interpretations as to what this custom silicon might look like.

One significant problem Facebook may face is that this job opening may just sit open in perpetuity. Another common criticism of FPGA as a solution is that it is hard to find developers who specialize in it. And while these kinds of problems are becoming much more interesting, it’s not clear whether this is more of an experiment than a full commitment by Facebook to custom hardware for its operations.

Nonetheless, this seems like more confirmation of Facebook’s custom hardware ambitions, and another piece of validation that Facebook’s data set has grown so large that, if it hopes to tackle complex AI problems like misinformation, it’s going to have to figure out how to create some kind of specialized hardware to deal with them.

A representative from Facebook did not immediately return a request for comment.

Facebook gets even shadier, limits EU privacy law reach

Facebook is quietly looking to limit the number of users that will be protected by Europe’s tough new data law, according to Reuters.

Outside of the U.S. and Canada, Facebook’s users agree to terms and conditions tied to the social media company’s operation in Ireland.

So, as the EU’s General Data Protection Regulation (GDPR) is set to come into force on May 25, even non-EU users would have had their data protected by the law on Facebook.

But now, Facebook is reportedly looking to ensure that GDPR only applies to European users next month, affecting 1.5 billion users in Australia, Africa, the Middle East and Asia. Read more…

Minds aims to decentralize the social network

Decentralization is the buzzword du jour. Everything – from our currencies to our databases – is supposed to exist, immutably, in this strange new world. And Bill Ottman wants to add our social media to the mix.

Ottman, an intense young man with a passion to fix the world, is the founder of Minds.com, a New York-based startup that has been receiving waves of new users as zealots and the not-so-zealous alike have been leaving other networks. In fact, Zuckerberg’s bad news is music to Ottman’s ears.

Ottman started Minds in 2011 “with the goal of bringing a free, open source and sustainable social network to the world,” he said. He and his CTO, Mark Harding, have worked in various non-profits including Code To Inspire, a group that teaches Afghan women to code. He said his vision is to get us out from under social media’s thumb.

“We started Minds in my basement after being disillusioned by user abuse on Facebook and other big tech services. We saw spying, data mining, algorithm manipulation, and no revenue sharing,” he said. “To us, it’s inevitable that an open source social network becomes dominant, as was the case with Wikipedia and proprietary encyclopedias.”

His efforts have paid off. The team now has over 1 million registered users and over 105,000 monthly active users. They are working on a number of initiatives, including an ICO, and the site makes money through “boosting” – essentially the ability to pay to have a piece of content float higher in the feed.

The company raised $350K in 2013 and then a little over a million dollars in a Reg CF Equity Crowdfunding raise.

Unlike Facebook, Minds is built on almost radical transparency. The code is entirely open source and it includes encrypted messenger services and optional anonymity for users. The goal, ultimately, is for the data to be decentralized and for any user to be able to remove his or her data. It’s also non-partisan, a fact that Ottman emphasized.

“We are not pushing a political agenda, but are more concerned with transparency, Internet freedom and giving control back to the user,” he said. “It’s a sad state of affairs when every network that cares about free speech gets lumped in with extremists.”

He was disappointed, for example, when people read that Reddit’s decision to shut down toxic subreddits was a success. It wasn’t, he said. Instead, those users just flocked to other, more permissive sites. However, he doesn’t think those sites have to be cesspools of hate.

“We are a community-owned social network dedicated to transparency, privacy and rewarding people for their contributions. We are called Minds because it’s meant to be a representation of the network itself,” he said. “Our mission is Internet freedom with privacy, transparency, free speech within the law and user control. Additionally, we want to provide our users with revenue opportunity and the ability to truly expand their reach and earn rewards for their contributions to the network.”

RSS is undead

RSS died. Whether you blame Feedburner, or Google Reader, or Digg Reader last month, or any number of other product failures over the years, the humble protocol has managed to keep on trudging along despite all evidence that it is dead, dead, dead.

Now, with Facebook’s scandal over Cambridge Analytica, there is a whole new wave of commentators calling for RSS to be resuscitated. Brian Barrett at Wired said a week ago that “… anyone weary of black-box algorithms controlling what you see online at least has a respite, one that’s been there all along but has often gone ignored. Tired of Twitter? Facebook fatigued? It’s time to head back to RSS.”

Let’s be clear: RSS isn’t coming back alive so much as it is officially entering its undead phase.

Don’t get me wrong, I love RSS. At its core, it is a beautiful manifestation of some of the most visionary principles of the internet, namely transparency and openness. The protocol really is simple and human-readable. It feels like how the internet was originally designed with static, full-text articles in HTML. Perhaps most importantly, it is decentralized, with no power structure trying to stuff other content in front of your face.

It’s wonderfully idealistic, but the reality of RSS is that it lacks the features required by nearly every actor in the modern content ecosystem, and I strongly suspect that its return is not forthcoming.

Now, it is important before diving in here to separate out RSS the protocol from RSS readers, the software that interprets that protocol. While some of the challenges facing this technology are reader-centric and therefore fixable with better product design, many of these challenges are ultimately problems with the underlying protocol itself.

Let’s start with users. I, as a journalist, love having hundreds of RSS feeds organized in chronological order, allowing me to see every single news story published in my areas of interest. This use case, though, covers a minuscule fraction of all users, most of whom aren’t paid to report on the news comprehensively. Instead, users want personalization and prioritization — they want a feed or stream that shows them the most important content first, since they are busy and lack the time to digest enormous amounts of content.

To get a flavor of this, try subscribing to the published headlines RSS feed of a major newspaper like the Washington Post, which publishes roughly 1,200 stories a day. Seriously, try it. It’s an exhausting experience wading through articles from the style and food sections just to run into the latest update on troop movements in the Middle East.
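
To see the firehose problem concretely, here is a minimal sketch, not from the article, that pulls a couple of feeds with the third-party feedparser library and flattens every entry into one reverse-chronological list. The feed URLs are just examples, and the result is exactly the undifferentiated stream described above.

    import time
    import feedparser  # third-party: pip install feedparser

    # Example feed URLs; substitute any RSS/Atom feeds you actually follow.
    FEEDS = [
        "https://techcrunch.com/feed/",
        "http://feeds.washingtonpost.com/rss/world",
    ]

    entries = []
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            # Some feeds omit timestamps; skip those entries rather than guess.
            stamp = entry.get("published_parsed") or entry.get("updated_parsed")
            if stamp:
                entries.append((time.mktime(stamp), entry.title, entry.link))

    # Newest first: a flat, purely chronological stream with no sense of priority.
    for stamp, title, link in sorted(entries, reverse=True)[:50]:
        print(time.strftime("%Y-%m-%d %H:%M", time.localtime(stamp)), "|", title, "|", link)

Nothing in the feed itself tells this script which of those fifty items an editor considered the lead story, which is the gap discussed next.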

Some sites try to get around this by offering an array of RSS feeds built around keywords. Yet, stories are almost always assigned more than one keyword, and keyword selection can vary tremendously in quality across sites. Now, I see duplicate stories and still manage to miss other stories I wanted to see.

Ultimately, all of media is prioritization — every site, every newspaper, every broadcast has editors involved in determining the hierarchy of information to be presented to users. Somehow, RSS (at least in its current incarnation) never understood that. This is a failure both of the readers themselves and of the protocol, which never forced publishers to provide signals on what was most and least important.

Another enormous challenge is discovery and curation. How exactly do you find good RSS feeds? Once you have found them, how do you group and prune them over time to maximize signal? Curation is one of the biggest on-boarding challenges of social networks like Twitter and Reddit, and has prevented both from reaching the stratospheric numbers of Facebook. The cold start problem with RSS is perhaps its greatest failing today, although it could potentially be solved by better RSS reader software without protocol changes.

RSS’ true failings, though, are on the publisher side, with the most obvious issue being analytics. RSS doesn’t allow publishers to track user behavior. It’s nearly impossible to get a sense of how many RSS subscribers there are, due to the way that RSS readers cache feeds. No one knows how much time someone spends reading an article, or whether they opened an article at all. In this way, RSS shares a product design problem with podcasting, in that user behavior is essentially a black box.
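
One of the few partial signals publishers do get is that some hosted readers (Feedly being the best-known example) advertise an aggregate subscriber count in the User-Agent string of their fetcher. A rough sketch of scraping that number out of a web server access log follows; the log format and the file path are assumptions, not anything standardized.

    import re

    # Matches e.g. "Feedly/1.0 (+http://www.feedly.com/fetcher.html; 1234 subscribers; ...)"
    SUBSCRIBERS = re.compile(r"(\d+)\s+subscribers", re.IGNORECASE)

    best_guess = 0
    with open("access.log", encoding="utf-8", errors="replace") as log:  # assumed log path
        for line in log:
            match = SUBSCRIBERS.search(line)
            if match:
                # The same count repeats on every fetch, so keep the largest value seen.
                best_guess = max(best_guess, int(match.group(1)))

    print("approximate subscribers reported by hosted readers:", best_guess)

Self-hosted readers and most desktop clients report nothing at all, which is the point: the audience stays mostly invisible.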

For some users, that lack of analytics is a privacy boon. The reality though is that the modern internet content economy is built around advertising, and while I push for subscriptions all the time, such an economy still looks very distant. Analytics increases revenues from advertising, and that means it is critical for companies to have those trackers in place if they want a chance to make it in the competitive media environment.

RSS also offers very few opportunities for branding content effectively. Given that the brand equity for media today is so important, losing your logo, colors, and fonts on an article is an effective way to kill enterprise value. This issue isn’t unique to RSS — it has affected Google’s AMP project as well as Facebook Instant Articles. Brands want users to know that the brand wrote something, and they aren’t going to use technologies that strip out what they consider to be a business critical part of their user experience.

These are just some of the product issues with RSS, and together they ensure that the protocol will never reach the ubiquity required to supplant centralized tech corporations. So, what are we to do then if we want a path away from Facebook’s hegemony?

I think the solution is a set of improvements. RSS as a protocol needs to be expanded so that it can offer more data around prioritization as well as other signals critical to making the technology more effective at the reader layer. This isn’t just about updating the protocol, but also about updating all of the content management systems that publish an RSS feed to take advantage of those features.
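
No such extension exists today, so the following is purely illustrative of what “more data around prioritization” could look like: a publisher attaches a namespaced weight to each item, a reader that understands the (hypothetical) namespace can sort on it, and every other reader simply ignores it. The namespace URL and element name below are invented for the example.

    import xml.etree.ElementTree as ET

    # Hypothetical namespace; RSS 2.0 has no standard priority element.
    PRIO_NS = "https://example.com/ns/editorial-priority"
    ET.register_namespace("prio", PRIO_NS)

    item = ET.Element("item")
    ET.SubElement(item, "title").text = "Lead story of the day"
    ET.SubElement(item, "link").text = "https://example.com/lead-story"
    # Editor-assigned weight a reader could sort or filter on (1 = front page).
    ET.SubElement(item, "{%s}weight" % PRIO_NS).text = "1"

    print(ET.tostring(item, encoding="unicode"))

The hard part is not the XML, of course, but getting content management systems to emit a signal like this and readers to respect it.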

That leads to the most significant challenge: solving RSS as a business model. There needs to be some sort of commerce layer around feeds, so that there is an incentive to improve and optimize the RSS experience. I would gladly pay for an Amazon Prime-like subscription that gave me unlimited text-only feeds from a bunch of major news sources at a reasonable price. It would also let me get my privacy back to boot.

Next, RSS readers need to get a lot smarter about marketing and on-boarding. They need to actively guide users to find where the best content is, and help them curate their feeds with algorithms (with some settings so that users like me can turn it off). These apps could be written in such a way that the feeds are built using local machine learning models, to maximize privacy.

Do I think such a solution will become ubiquitous? No, I don’t, and certainly not in the decentralized way that many would hope for. I don’t think users actually, truly care about privacy (Facebook has been stealing it for years — has that stopped its growth at all?) and they certainly aren’t news junkies either. But with the right business model in place, there could be enough users to make such a renewed approach to streams viable for companies, and that is ultimately the critical ingredient you need to have for a fresh news economy to surface and for RSS to come back to life.

Australia also investigates Facebook following data scandal

Facebook might be getting a “booting” Down Under.

The Office of the Australian Information Commissioner (OAIC) announced on Thursday it would open a formal investigation into the social media giant to see if it has breached Australia’s privacy laws. 

It follows news that the personal information of 300,000 Australian Facebook users “may have been acquired and used without authorisation” as part of the Cambridge Analytica scandal that affected 87 million people.

OAIC said it would work with foreign authorities on the investigation, “given the global nature of the matter.”  Read more…

Highlights and audio from Zuckerberg’s emotional Q&A on scandals

“This is going to be a never-ending battle,” said Mark Zuckerberg. He just gave the most candid look yet into his thoughts about Cambridge Analytica, data privacy, and Facebook’s sweeping developer platform changes today during a conference call with reporters. Sounding alternately vulnerable about his past negligence and confident about Facebook’s strategy going forward, Zuckerberg took nearly an hour of tough questions.

You can read a transcript here and listen to a recording of the call below:

The CEO started the call by giving his condolences to those affected by the shooting at YouTube yesterday. He then delivered this mea culpa on privacy:

We’re an idealistic and optimistic company . . . but it’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm as well . . . We didn’t take a broad enough view of what our responsibility is and that was a huge mistake. That was my mistake.

It’s not enough to just connect people. We have to make sure those connections are positive and that they’re bringing people together.  It’s not enough just to give people a voice, we have to make sure that people are not using that voice to hurt people or spread misinformation. And it’s not enough to give people tools to sign into apps, we have to make sure that all those developers protect people’s information too.

It’s not enough to have rules requiring that they protect the information. It’s not enough to believe them when they’re telling us they’re protecting information. We actually have to ensure that everyone in our ecosystem protects people’s information.

This is Zuckerberg’s strongest statement yet about his and Facebook’s failure to anticipate worst-case scenarios, which has led to a string of scandals that are now decimating the company’s morale. Spelling out how policy means nothing without enforcement, and pairing that with a massive reduction in how much data app developers can request from users makes it seem like Facebook is ready to turn over a new leaf.

Here are the highlights from the rest of the call:

On Zuckerberg calling fake news’ influence “crazy”: “I clearly made a mistake by just dismissing fake news as crazy — as having an impact . . . it was too flippant. I never should have referred to it as crazy.”

On deleting Russian trolls: Facebook didn’t just delete 135 Facebook and Instagram accounts belonging to the Internet Research Agency, the Russian government-connected election interference troll farm, as it announced yesterday. Zuckerberg said Facebook also removed “a Russian news organization that we determined was controlled and operated by the IRA”.

On the 87 million number: Regarding today’s disclosure that up to 87 million people had their data improperly accessed by Cambridge Analytica, “it very well could be less but we wanted to put out the maximum that we felt it could be as soon as we had that analysis.” Zuckerberg also referred to The New York Times’ report, noting that “We never put out the 50 million number, that was other parties.”

On users having their public info scraped: Facebook announced this morning that “we believe most people on Facebook could have had their public profile scraped” via its search by phone number or email address feature and account recovery system. Scammers abused these to punch in one piece of info and then pair it to someone’s name and photo. Zuckerberg said search features are useful in languages where it’s hard to type or a lot of people have the same names. But “the methods of rate limiting this weren’t able to prevent malicious actors who cycled through hundreds of thousands of IP addresses and did a relatively small number of queries for each one, so given that and what we know today it just makes sense to shut that down.”

On when Facebook learned about the scraping and why it didn’t inform the public sooner: This was my question, and Zuckerberg dodged, merely saying “We looked into this and understood it more over the last few days as part of the audit of our overall system”, while declining to specify when Facebook first identified the issue.

On implementing GDPR worldwide: Zuckerberg disputed a Reuters story from yesterday saying that Facebook wouldn’t bring GDPR privacy protections to the U.S. and elsewhere. Instead, he said, “we’re going to make all the same controls and settings available everywhere, not just in Europe.”

On whether the board has discussed him stepping down as chairman: “Not that I’m aware of,” Zuckerberg said happily.

On if he still thinks he’s the best person to run Facebook: “Yes. Life is about learning from the mistakes and figuring out what you need to do to move forward . . . I think what people should evaluate us on is learning from our mistakes . . .and if we’re building things people like and that make their lives better . . . there are billions of people who love the products we’re building.”

On the Boz memo and prioritizing business over safety: “The things that makes our product challenging to manage and operate are not the tradeoffs between people and the business. I actually think those are quite easy because over the long-term, the business will be better if you serve people. I think it would be near-sighted to focus on short-term revenue over people, and I don’t think we’re that short-sighted. All the hard decisions we have to make are tradeoffs between people. Different people who use Facebook have different needs. Some people want to share political speech that they think is valid, and other people feel like it’s hate speech . . . we don’t always get them right.”

On whether Facebook can audit all app developers: “We’re not going to be able to go out and necessarily find every bad use of data,” Zuckerberg said, but he added confidently, “I actually do think we’re going to be able to cover a large amount of that activity.”

On whether Facebook will sue Cambridge Analytica: “We have stood down temporarily to let the [UK government] do their investigation and their audit. Once that’s done we’ll resume ours … and ultimately to make sure none of the data persists or is being used improperly. And at that point if it makes sense we will take legal action if we need to do that to get people’s information.”

On how Facebook will measure its impact on fixing privacy: Zuckerberg wants to be able to measure “the prevalence of different categories of bad content like fake news, hate speech, bullying, terrorism. . . That’s going to end up being the way we should be held accountable and measured by the public . . .  My hope is that over time the playbook and scorecard we put out will also be followed by other internet platforms so that way there can be a standard measure across the industry.”

On whether Facebook should try to earn less money by using less data for targeting: “People tell us if they’re going to see ads they want the ads to be good . . . that the ads are actually relevant to what they care about . . . On the one hand people want relevant experiences, and on the other hand I do think there’s some discomfort with how data is used in systems like ads. But I think the feedback is overwhelmingly on the side of wanting a better experience. Maybe it’s 95-5.”

On whether #DeleteFacebook has had an impact on usage or ad revenue: “I don’t think there’s been any meaningful impact that we’ve observed…but it’s not good.”

On the timeline for fixing data privacy: “This is going to be a never-ending battle. You never fully solve security. It’s an arms race,” Zuckerberg said early in the call. Then to close Q&A, he said “I think this is a multi-year effort. My hope is that by the end of this year we’ll have turned the corner on a lot of these issues and that people will see that things are getting a lot better.”

Overall, this was the moment of humility, candor, and contrition Facebook desperately needed. Users, developers, regulators, and the company’s own employees have felt in the dark this last month, but Zuckerberg did his best to lay out a clear path forward for Facebook. His willingness to endure this questioning was admirable, even if he deserved the grilling.

The company’s problems won’t disappear, and its past transgressions can’t be apologized away. But Facebook and its leader have finally matured past the incredulous dismissals and paralysis that characterized its response to past scandals. It’s ready to get to work.

Facebook plans crackdown on ad targeting by email without consent

Facebook is scrambling to add safeguards against abuse of user data as it reels from backlash over the Cambridge Analytica scandal. Now TechCrunch has learned Facebook will launch a certification tool that demands that marketers guarantee email addresses used for ad targeting were rightfully attained. This new Custom Audiences certification tool was described by Facebook representatives to their marketing clients, according to two sources. Facebook will also prevent the sharing of Custom Audience data across Business accounts.

This snippet of a message sent by a Facebook rep to a client notes that “for any Custom Audiences data imported into Facebook, Advertisers will be required to represent and warrant that proper user consent has been obtained.”

Once shown the message, Facebook spokesperson Elisabeth Diana told TechCrunch “I can confirm there is a permissions tool that we’re building.” It will require that advertisers and the agencies representing them pledge that “I certify that I have permission to use this data”, she said.

Diana noted that “We’ve always had terms in place to ensure that advertisers have consent for data they use, but we’re going to make that much more prominent and educate advertisers on the way they can use the data.” The change isn’t in response to a specific incident, but Facebook does plan to re-review the way it works with third-party data measurement firms to ensure everything is responsibly used. “This is a way to safeguard data,” Diana concluded. The company declined to specify whether it has ever blocked usage of a Custom Audience because it suspected the owner didn’t have user consent.

The social network is hoping to prevent further misuse of ill-gotten data after data on 50 million Facebook users, pulled by Dr. Aleksandr Kogan’s app, was passed to Cambridge Analytica in violation of Facebook policy. That data is suspected to have been used by Cambridge Analytica to support the Trump and Brexit campaigns, which employed Custom Audiences to reach voters.

Facebook launched Custom Audiences back in 2012 to let businesses upload hashed lists of their customers’ email addresses or phone numbers, allowing advertisers to target specific people instead of broad demographics. Custom Audiences quickly became one of Facebook’s most powerful advertising options because businesses could easily reach existing customers to drive repeat sales. The Custom Audiences terms of service require that businesses have “provided appropriate notice to and secured any necessary consent from the data subjects” to attain and use these people’s contact info.
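
Custom Audience matching is generally described as operating on normalized, hashed identifiers rather than raw contact details. The sketch below shows that preparation step in the commonly documented form (trim, lowercase, SHA-256); it uses made-up addresses and is not Facebook’s actual upload API.

    import hashlib

    def hash_identifier(value: str) -> str:
        """Normalize an email address (trim whitespace, lowercase), then SHA-256 hash it."""
        normalized = value.strip().lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    # Made-up addresses; a real list would be a CRM export the advertiser is
    # certifying it had consent to collect in the first place.
    customer_emails = ["Alice@Example.com ", "bob@example.org"]
    print([hash_identifier(email) for email in customer_emails])

Hashing keeps raw addresses out of the upload, but it does nothing to prove the list was collected with consent, which is exactly the gap the certification tool is meant to address.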

But just like Facebook’s policy told app developers like Kogan not to sell, share, or misuse data they collected from Facebook users, the company didn’t go further to enforce this rule. It essentially trusted that the fear of legal repercussions or suspension on Facebook would deter violations of both its app data privacy and Custom Audiences consent policies. With clear financial incentives to bend or break those rules and limited effort spent investigating to ensure compliance, Facebook left itself and its users open to exploitation.

Last week Facebook banned the use of third-party data brokers like Experian and Acxiom for ad targeting, closing a marketing feature called Partner Categories. Facebook is believed to have been trying to prevent any ill-gotten data from being laundered through these data brokers and then directly imported to Facebook to target users. But that left open the option for businesses to compile illicit data sets themselves, or pull them from data brokers, then upload them to Facebook as Custom Audiences.

The Custom Audiences certification tool could close that loophole. It’s still being built, so Facebook wouldn’t say exactly how it will work. I asked if Facebook would scan uploaded user lists and try to match them against a database of suspicious data, but for now it sounds more like Facebook will merely require a written promise.

Meanwhile, barring the sharing of Custom Audiences between Business Accounts might prevent those with access to email lists from using them to promote companies unrelated to the one to which users gave their email address. Facebook declined to comment on how the new ban on Custom Audience sharing would work.

Now Facebook must find ways to thwart misuse of its targeting tools and audit anyone it suspects may have already violated its policies. Otherwise it may draw the ire of privacy-conscious users and critics, and strengthen the case for substantial regulation of its ads business (though regulation could end up protecting Facebook from competitors who can’t afford compliance). Still, the question remains why it took such a massive data privacy scandal for Facebook to take a tougher stance on requiring user consent for ad targeting. And given that written promises didn’t stop Kogan or Cambridge Analytica from misusing data, why would they stop advertisers bent on boosting profits?

For more on Facebook’s recent scandals, check out TechCrunch’s coverage.

Baidu’s streaming video service iQiyi falls 13.6% in Nasdaq debut

The streaming video service iQiyi, a business owned by China’s online search giant Baidu, dropped 13.6% in its first day of trading on the Nasdaq — closing at $15.55, or down $2.45 from its opening price of $18.

The company still managed to pull off one of the largest public offerings by a Chinese tech company in the past two years, raising $2.25 billion. The only Chinese technology company to make a larger splash in U.S. markets is Alibaba, the commercial technology juggernaut that raised $21.5 billion in its public offering on the New York Stock Exchange in 2014.

“It’s a special day and an exciting day for iQiyi, and I will say it’s also an exciting day for the Chinese internet,” said Baidu chief executive Robin Li of the iQiyi public offering. “Eight years ago, when we got started, we were not the first one, we were not the largest one, but we gradually worked our way up, and caught up and surpassed everyone. It has been not an easy journey, but finally we are public. We surpassed everyone. That’s because we have a very strong team. I have a full confidence on Gong Yu and on the whole iQiyi Team.”

Over its eight-year history, there’s no doubt that iQiyi has gone from laggard to lustrous in the Chinese streaming video market. Baidu’s offering and Tencent’s video service have both managed to overtake the previous market leader, Youku Tudou, which was acquired by Alibaba in 2016.

Tencent leveraged its 980 million monthly active users on the WeChat mobile messaging app, the 653 million monthly active users on its older QQ messaging platform and the company’s attendant social network (think Facebook) to juice growth of its video streaming offering, according to analysis from The Motley Fool.

For Baidu, the company’s pole position in online search became critical to the growth of iQiyi — along with a partnership with Xiaomi, China’s ubiquitous hardware manufacturer and technology developer. The company also locked in early content licensing deals with big Hollywood studios like Lions Gate and Paramount — and a deal with Netflix to juice its subscriber base in China. By the end of 2017, Baidu was claiming more than 487 million monthly active users for the service.

The former leader in China’s video streaming market, Youku Tudou, seems to have wilted under the weight of its acquirer’s platform. Alibaba’s ecommerce was never a natural fit with online video streaming.

For all of their massive user bases, each of China’s leading video streaming services faces a profitability problem. For its part, iQiyi went to market with substantial losses of $574.4 million for the last fiscal year.

Scientist at centre of Facebook scandal didn’t think data would be used to target voters

The man who helped gather Facebook users’ information for Cambridge Analytica claims that he didn’t think it’d be used to target voters.

Data scientist Aleksandr Kogan, who also goes by the surname of Spectre, told CNN‘s Anderson Cooper on Tuesday that he was “heavily siloed” from knowing about the UK data firm’s clients and funders, who are linked to the 2016 Trump election campaign.

“I found out about Donald Trump just like everybody else, through the news,” Kogan told the program. 

Exclusive: Aleksandr Kogan, the data scientist who worked with Cambridge Analytica to harvest data, tells @AndersonCooper he didn’t know they would use the data to target voters. Full interview, tonight on 9p ET, on @CNN https://t.co/9L3itGMW79 pic.twitter.com/z4ny9vytCp

— Anderson Cooper 360° (@AC360) March 21, 2018 Read more…

Facebook’s latest privacy debacle stirs up more regulatory interest from lawmakers

Facebook’s late Friday disclosure that a data analytics company with ties to the Trump campaign improperly obtained — and then failed to destroy — the private data of 50 million users is generating more unwanted attention from politicians, some of whom were already beating the drums of regulation in the company’s direction.

On Saturday morning, Facebook dove into the semantics of its disclosure, arguing against wording in the New York Times story it had been attempting to get out in front of, which referred to the incident as a breach. Most of this happened on the Twitter account of Facebook chief security officer Alex Stamos before Stamos took down his tweets and the gist of the conversation made its way into an update to Facebook’s official post.

“People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked,” the added language argued.

I have deleted my Tweets on Cambridge Analytica, not because they were factually incorrect but because I should have done a better job weighing in.

— Alex Stamos (@alexstamos) March 17, 2018

While the language is up for debate, lawmakers don’t appear to be looking kindly on Facebook’s arguably legitimate effort to sidestep data breach notification laws that, were this a proper hack, could have required the company to disclose that it lost track of the data of 50 million users, only 270,000 of whom consented to sharing their data with the third-party app involved. (In April 2015, Facebook changed its policy, shutting down the API that shared friends’ data with third-party Facebook apps without those friends’ consent.)

While most lawmakers and politicians haven’t crafted formal statements yet (expect a landslide of those on Monday), a few are weighing in. Minnesota Senator Amy Klobuchar is calling for Facebook’s chief executive — and not just its counsel — to appear before the Senate Judiciary Committee.

Facebook breach: This is a major breach that must be investigated. It’s clear these platforms can’t police themselves. I’ve called for more transparency & accountability for online political ads. They say “trust us.” Mark Zuckerberg needs to testify before Senate Judiciary.

— Amy Klobuchar (@amyklobuchar) March 17, 2018

Senator Mark Warner, a prominent voice on tech’s role in enabling Russian interference in the 2016 U.S. election, used the incident to call attention to a piece of bipartisan legislation called the Honest Ads Act, designed to “prevent foreign interference in future elections and improve the transparency of online political advertisements.”

“This is more evidence that the online political advertising market is essentially the Wild West,” Warner said in a statement. “Whether it’s allowing Russians to purchase political ads, or extensive micro-targeting based on ill-gotten user data, it’s clear that, left unregulated, this market will continue to be prone to deception and lacking in transparency.”

That call for transparency was echoed Saturday by Massachusetts Attorney General Maura Healey who announced that her office would be launching an investigation into the situation. “Massachusetts residents deserve answers immediately from Facebook and Cambridge Analytica,” Healey tweeted. TechCrunch has reached out to Healey’s office for additional information.

On Cambridge Analytica’s side, it looks possible that the company may have violated Federal Election Commission laws forbidding foreign participation in domestic U.S. elections. The FEC enforces a “broad prohibition on foreign national activity in connection with elections in the United States.”

“Now is a time of reckoning for all tech and internet companies to truly consider their impact on democracies worldwide,” said Nuala O’Connor, President of the Center for Democracy & Technology. “Internet users in the U.S. are left incredibly vulnerable to this sort of abuse because of the lack of comprehensive data protection and privacy laws, which leaves this data unprotected.”

Just what lawmakers intend to do about big tech’s latest privacy debacle will be more clear come Monday, but the chorus calling for regulation is likely to grow louder from here on out.

Facebook suspends Trump-linked data firm Cambridge Analytica

A data analytics firm linked to both Donald Trump’s presidential campaign and the Brexit referendum has been banned by Facebook.

Cambridge Analytica, the British firm that claimed it helped Trump get elected, has been suspended from Facebook, the company revealed. 

At issue is Cambridge Analytica’s use of user data obtained by a third-party developer, a University of Cambridge professor named Dr. Aleksandr Kogan. Kogan, according to Facebook, obtained information on 270,000 Facebook users via his app, which he touted as a research experiment.  Read more…

UN officials blast Facebook over spread of Rohingya hate speech

Facebook has long been criticised for its role in the Rohingya crisis, an assessment now underscored by comments by United Nations investigators.

Marzuki Darusman, chairman of the UN Independent International Fact-Finding Mission in Myanmar, told reporters that social media had a “determining role” in spreading hate speech in the country, according to Reuters.

“It has … substantively contributed to the level of acrimony and dissension and conflict, if you will, within the public. Hate speech is certainly of course a part of that. As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media,” Darusman said. Read more…

Facebook to publishers: The News Feed algorithm isn’t why you’re failing

It’s been a rocky year in Facebook and publisher relations, but the social network has a new — very blunt — message for struggling publishers: it’s probably your fault. 

Speaking at a panel at South by Southwest, Facebook’s head of news products, Alex Hardiman, had some strong words for critics who say the company’s recent News Feed algorithm change is hurting publishers. 

In response to a question about digital publisher Little Things, whose CEO blamed Facebook’s News Feed algorithm after the company shut down, Hardiman said “there’s a reason certain publishers don’t do well on Facebook.” Read more…

Facebook will verify the location of U.S. election ad buyers by mailing them postcards

 Facebook’s global director of policy programs says it will start sending postcards by snail mail to verify buyers of ads related to United States elections. Katie Harbath, who described the plan at a conference held by the National Association of Secretaries of State this weekend, didn’t reveal when the program will start, but told Reuters that it would be before the… Read More

Finally, tech’s elite speak out against Silicon Valley’s unchecked power

They helped create Facebook, Google, and other companies that claim to bring the world together. But on Monday evening, these people gathered to discuss how tech products are tearing us apart.

“Facebook created a business model that essentially made people who believe [conspiracy theories] more valuable,” said Roger McNamee, an early advisor to Mark Zuckerberg, speaking at an event at The New School in New York City titled “The Dark Side of Design: A Conversation About Addictive Technology.” “It was in [Facebook’s] interest to appeal to fear and anger.”

McNamee is one of the founders of the Center for Humane Technology, a new coalition of tech creators dedicated to studying the effects of technology. This week, the group announced a partnership with nonprofit media watchdog group Common Sense Media to launch an ad campaign on tech addiction.  Read more…

This chatbot wants to cut through the noise on climate science

Noise and misinformation, especially on climate, have long been a problem on social media.

To counter this, Australian not-for-profit the Climate Council has created a Facebook Messenger chatbot to inform people about climate science.

Launched on its Facebook page last week, it’s an effort to connect with younger people who are interested in issues like climate change, but aren’t the most engaged with the organisation — largely due to broader information overload.

“Young people are saturated on social media because they’re the most active on it, we know that they care and that they’ve got the thirst for information,” Nelli Huié, digital manager at the Climate Council, explained. Read more…

Someone’s impersonating Chris Pratt on Facebook and he’s not happy about it

Chris Pratt is a known prankster, but his latest Instagram post isn’t a joke at all. 

The Guardians of the Galaxy actor just discovered a fake profile claiming to be him on Facebook, and has taken it upon himself to make sure no one is duped by the fake Chris Pratt that’s luring young female fans “trying to get their numbers and who knows what else.”

He sent out a warning on Instagram Thursday night, hoping to alert people to the dangers of the imposter. 

Indonesia wanted to block WhatsApp because people are sending ‘obscene GIFs’

WhatsApp appears to be the latest social media platform to run afoul of Indonesia’s censorship rules.

The populous Southeast Asian nation on Monday vowed to block WhatsApp within 48 hours if it did not ensure that “obscene” GIFs were removed from the platform.

Indonesia later dropped its threat after Tenor, WhatsApp’s third-party GIF provider, appeared to have fixed the issue.

“We see now that they have done what we asked. Therefore we won’t block them,” the director general of Indonesia’s communication and informatics ministry, Semuel Pangerapan, told Reuters on Tuesday. Read more…

Facebook’s Workplace, now at 30,000 orgs, adds Chat desktop apps and group video chat

It’s been one year since Workplace, Facebook’s social network designed specifically for businesses and other organizations, came out of beta to take on the likes of Slack, Atlassian, Microsoft and others in the world of enterprise collaboration. Now, with 30,000 organizations using Workplace across some 1 million groups (more than double the figures Facebook published in April)… Read More

Facebook’s Workplace is turning into a serious Slack competitor

Facebook may be the last company you’d ever expect to make software for serious businesses, but the social network is quickly proving the haters wrong.

A year after officially launching Workplace, the business-focused version of Facebook, the service now counts more than 30,000 businesses and organizations using the software, Facebook announced Thursday. 

That group, more than double what Workplace claimed six months ago, includes names like Starbucks, Spotify, Lyft, and Walmart.

Though not as huge as some of its biggest competitors — less than a year in, Microsoft Teams counts more than 125,000 organizations — the growth is impressive, considering it wasn’t that long ago that the idea of Facebook launching professional software seemed like a joke. Read more…

Facebook comments might soon get colored backgrounds, because we all deserve to suffer

Oh, no. First it was the colored statuses — those awful, ugly, often-gradient-based abominations that pop up in your Facebook feed now and then — but we thought at least our Facebook comments were safe.

Well, not any more. It appears that Facebook is testing comments with colored backgrounds. 

First spotted by The Next Web on Wednesday, the feature allows users to choose from a solid or gradient color for the comment’s background, just like the statuses.

The feature currently appears to only be available on mobile, and only to a small subset of users. We’ve checked a dozen or so phones here at Mashable, and no one had the feature available.  Read more…

Let's all take a deep breath and stop freaking out about Facebook's bots 'inventing' a new language

Tesla CEO Elon Musk made headlines last week when he tweeted about his frustration that Mark Zuckerberg, ever the optimist, doesn’t fully understand the potential danger posed by artificial intelligence.

So when media outlets began breathlessly re-reporting a weeks-old story that Facebook’s AI-trained chatbots “invented” their own language, it wasn’t surprising that the story caught more attention than it did the first time around.

Understandable, perhaps, but it’s exactly the wrong thing to be focusing on. The fact that Facebook’s bots “invented” a new way to communicate wasn’t even the most shocking part of the research to begin with. Read more…

Instagram should let users save and share their photo preferences

While the world waits for Instagram to launch a location-sharing feature à la Snapchat, it’s worth wondering about the potential arrival of something far simpler and more obvious: user-preset filters.
Instagram now allows you to prioritize your favorite filters at the beginning of the list and leave the ones that you don’t use often at the end. However, each user has their own… Read More

Crunch Report | Facebook Helps You Find Wi-Fi

Crunch Report, June 30: Today’s Stories

Facebook is rolling out its ‘Find Wi-Fi’ feature worldwide
Delivery Hero’s valuation surpasses $5B following successful IPO
Chat app Kakao raises $437M for its Korean ride-hailing service
Cabin secures $3.3M for its ‘moving hotel’

Credits
Written and Hosted by: Anthony Ha
Filmed by: Matthew Mauro
Edited by: Chris Gates
Notes:
Tito… Read More

Crunch Report | Apple Rolls Out Early Version Of Its Safe Driving Feature

Crunch Report, June 22: Today’s Stories

Do Not Disturb While Driving feature rolls out in Apple’s newest iOS 11 beta
Sean Parker has left Spotify’s board; Padmasree Warrior, Thomas Staggs join in lead-up to IPO
Trump might kill next month’s new startup visa before it takes effect
Facebook is testing a feature to prevent profile pictures being abused by other users
Tantan, China’s… Read More
