security

Apple releases iPhone, iPad, and Watch security patches for zero-day bug under active attack

Apple has released an update for iPhones, iPads and Watches to patch a security vulnerability under active attack by hackers.

The security update lands as iOS 14.4.2 and iPadOS 14.4.2; older devices receive the same patch as iOS 12.5.2, and watchOS updates to 7.3.3.

Apple said the vulnerability, discovered by security researchers at Google’s Project Zero, may have been “actively exploited” by hackers. The bug is found in WebKit, the browser engine that powers the Safari browser across all Apple devices.

It’s not known who is actively exploiting the vulnerability, or who might have fallen victim. Apple did not say if the attack was targeted against a small subset of users or if it was a wider attack. It’s the third time (by our count) that Apple has pushed out a security-only update this year to fix flaws under active attack. Earlier this month the company released patches for similar vulnerabilities in WebKit.

Update today.

Indian state government website exposed COVID-19 lab test results

A security flaw in a website run by the government of West Bengal in India exposed the lab results of at least hundreds of thousands of residents, though likely millions, who took a COVID-19 test.

The website is part of the West Bengal government’s mass coronavirus testing program. Once a COVID-19 test result is ready, the government sends a text message to the patient with a link to its website containing their test results.

But security researcher Sourajeet Majumder found that the patient’s unique test identification number in the link was obscured only with base64 encoding, which can be trivially decoded using online tools. Because the identification numbers were sequential, the website bug meant that anyone could change that number in their browser’s address bar and view other patients’ test results.
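Base64 is a reversible encoding, not encryption, so it adds no real protection. A minimal sketch of why sequential, base64-encoded IDs are dangerous (the ID value below is invented for illustration):

```python
import base64

# A hypothetical test-report ID as it might appear, encoded, in a results
# link; the value is invented for illustration.
encoded = base64.b64encode(b"1000234").decode()

# Anyone can reverse the encoding with standard tools or online converters...
report_id = int(base64.b64decode(encoded))

# ...and because IDs were assigned sequentially, re-encoding a neighbouring
# ID yields a working link to another patient's results: a classic
# insecure direct object reference (IDOR).
next_param = base64.b64encode(str(report_id + 1).encode()).decode()
```

The fix for this class of bug is to use unguessable, random identifiers and to check that the requester is authorized to view the record, rather than relying on the ID being hard to find.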

The test results contain the patient’s name, sex, age, and postal address, and whether the patient’s lab test came back positive, negative, or inconclusive for COVID-19.

Majumder told TechCrunch that he was concerned a malicious attacker could scrape the site and sell the data. “This is a privacy violation if somebody else gets access to my private information,” he said.

Two redacted COVID-19 lab test results exposed as a result of a security vulnerability on the West Bengal government’s website. (Screenshot: TechCrunch)

Majumder reported the vulnerability to India’s CERT, the country’s dedicated cybersecurity response unit, which acknowledged the issue in an email. He also contacted the West Bengal government’s website manager, who did not respond. TechCrunch independently confirmed the vulnerability and also reached out to the West Bengal government, which pulled the website offline, but did not return our requests for comment.

TechCrunch held our report until the vulnerability was fixed or no longer presented a risk. At the time of publication, the affected website remains offline.

It’s not known exactly how many COVID-19 lab results were exposed because of this security lapse, or if anyone other than Majumder discovered the vulnerability. At the time the website was pulled offline at the end of February, the state government had tested more than 8.5 million residents for COVID-19.

West Bengal is one of the most populated states of India, with about 90 million residents. Since the start of the pandemic, the state government has recorded more than 10,000 coronavirus deaths.

It’s the latest of several security incidents in the past few months to hit India and its response to the coronavirus pandemic.

Last May, India’s largest cell network Jio admitted a security lapse after a security researcher found an exposed database containing results from the company’s coronavirus symptom checker, which Jio had launched months earlier.

In October, a security researcher found Dr Lal PathLabs left hundreds of spreadsheets containing millions of patient booking records — including for COVID-19 tests — on a public storage server that was not protected with a password, allowing anyone to access sensitive patient data.


Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using SecureDrop.

4 Great Ways To Improve The Safety And Security Of Nursing Homes

Working at a nursing home can be stressful. You’re responsible not just for the well-being of elderly residents, but also for their safety and security — 24/7.

No matter how talented your workforce is, you still need technology to help. Here are four great ways to embrace technology and reinforce the safety and security of your nursing home.

Incorporate In-Room Patient Monitoring

Patients can’t realistically have around-the-clock staff assistance in their rooms, so it’s essential to have in-room monitoring that ensures resident safety and alerts staff to important activity.

For instance, you can install motion sensors inside rooms to monitor the patient’s movement. Such devices could be triggered when residents need help getting out of bed, notifying staff members to assist them.

Leverage 24/7 Security Camera Surveillance

Cameras are vital not just to monitor resident activity and boost staff productivity, but for the overall security of your nursing home. 

Apart from having 24/7 video surveillance in hallways and elevators, your nursing home should have video cameras for entrance/exits and parking lots, too. Cameras can help prevent theft in the facility and enhance security for residents, staff, and guests.

Furthermore, video surveillance helps with quality assurance, confirming that staff administer medication properly and keep track of other resources. In the event of theft, video footage can help settle liability and theft claims and reduce workers’ compensation claims.

Ensure Complete Perimeter Access Control

Video surveillance plays a vital role in perimeter security, but perimeter access control helps prevent trespassing and keeps patients from exiting the building without a staff escort.

Rather than basic mechanical locks, your facility should have electronic access control with access cards for staff. The main entrance should be equipped with an intercom system to ensure anyone who enters the property has the authority to do so.

Access control is particularly critical for memory care units, as patients with dementia or Alzheimer’s may wander off to other parts of the building or even exit the building if it’s not properly secured.

Specialized access control for memory care units in your nursing home may include:

  • Credential technology for entry/exit, such as keypads with PIN codes for staff, so patients can only access certain parts of the building with accompanying staff.
  • Electronic bands worn by patients that interface with door-mounted readers. If a patient passes through a restricted door, the system sets off an alarm for your staff.
  • Limited entry/exit points (with a fire exit), to make it easy for your staff to track patient movement in the facility.

Don’t Forget Fire Safety

Fire safety is imperative for any property, but especially so for nursing homes where residents may have mobility problems and require more time to evacuate in case of emergencies.

Fire hazards are also just about everywhere in nursing homes, from oxygen tanks that accelerate combustion to kitchen stoves and other appliances.

So, there should be no compromises when it comes to having functional smoke detectors, carbon monoxide alarms, fire sprinklers, and fire extinguishers, together with unmistakable fire escape routes and plans.

Other Safety Questions to Ask

Apart from the four security measures outlined above, here are a few quick questions to consider when evaluating your nursing home’s security:

  • Is the property gated properly and well-lit?
  • Is there a place in each patient’s room where valuables can be secured?
  • Do you conduct background checks on new hires?
  • Do you have qualified security personnel?
  • Is the staff well-trained to tackle various security issues?

Closing Thoughts on Nursing Home Security Measures

Running a nursing home is arduous, to say the least. You want your residents to live comfortably and stress-free, instead of being concerned about their safety and security.

From in-room monitoring to video surveillance, and perimeter access control to fire safety — make sure to cover all of these points in your nursing home to make life easier for you and your residents.

The post 4 Great Ways To Improve The Safety And Security Of Nursing Homes appeared first on Dumb Little Man.

How To Protect Yourself From SMS Scams

Smartphones and cell phones have changed the way we communicate. We can instantly send and receive messages through wireless calls, text messages or SMS, email, and social media platforms. These technological advances bring advantages and disadvantages, and sometimes danger.

SMS scams, or text scams, are prevalent in this age of smartphones and cell phones. Some masquerade as SMS marketing; others are more deceitful.

SMS marketing is sending promotional campaigns for marketing purposes via text messages. These messages are meant to communicate time-sensitive offers, updates, and alerts to potential or established customers.

Businesses use SMS marketing because it is one of the more effective forms of communicating with customers when done correctly. When done incorrectly, these can be viewed as unwanted messages and solicitations that bombard people regularly.

SMS scams are a problem because more and more people are using mobile banking and online shopping, making them easier targets for scammers and fraudsters. There are different types of scams people need to be aware of.

  • Spam SMS messages usually notify the recipient that they have won a prize and ask them to reply with personal information, including bank or credit card details.
  • Phishing is when scammers send messages pretending to be from a reputable company or someone the recipient knows, asking them to verify personal details such as passwords or bank information.
  • SMS spoofing is when the scammer sends text messages carrying the name or number of a well-known brand or company.
  • SMS malware attacks are mobile malware scams that send out unsafe links. Some scammers use specific tactics to get people to do something they don’t realize is dangerous.
  • Family emergency is one of the most common ruses. The message will say that the recipient’s loved one got into an accident or is in trouble and needs immediate financial help. In a panic, the victim will then send what is being asked of them in the hopes of helping their loved ones, not realizing they are being duped.
  • Refund scam. The message will tell the victim that they will receive a refund from a service provider but need certain information for it to be processed. Using the information provided, the fraudster will gain access to the victim’s account. Do not respond to these types of texts and do not give away personal information right away.
  • Reactivation scam. The message will tell the victim that their account has been compromised and asks them to text a code or reset their password through a suspicious link to reactivate their “account.”
  • Prize scam. The scammer sends the victim a text message saying that they’ve won a prize or holiday getaway from a contest they didn’t enter. The text will also contain a link where the victim will input their personal information that the scammer will use to their advantage.
  • Parcel delivery scam. With online shopping and delivery becoming more prevalent today, scammers can now send fake messages imitating official couriers but will ask for additional information or extra charges from the recipient to ensure the delivery of their packages.

If you receive these text messages or anything similar, here are some ways to avoid falling into these SMS scams.

  • Be on the lookout for unusual or unknown numbers. Most brands and companies send from verified sender IDs or short codes of fewer than ten digits. If a number seems iffy, ignore or delete the message or, better yet, block the sender.
  • Check for grammatical and spelling mistakes. All companies and brands use copywriters and editors to create marketing messages. Scammers will often make spelling and grammatical errors. This may seem simple, but it’s the easiest way to identify SMS scams.
  • Double-check messages. Recall if you’ve entered a contest or if there even really is one from a particular brand or company. If there isn’t, then it is definitely a scam. Report it to the proper authorities or the company itself so that they can warn their other customers of such activity.
  • Don’t click any links. If you receive an SMS that contains a link, don’t click on it, as it may be designed to steal information or spread malware. Also change or update the passwords of your online accounts and email regularly to make them harder for hackers to access.
  • Don’t trust text messages that contain your name. Just because a text message has your name doesn’t make it genuine or legitimate. Chances are the scammer got your name from social media or other sources.
  • Verify the authenticity of the company or the sender. If you do not recognize the name of the company or brand that sent you the text message, do some digging to check if they have an official website or social media channels or if the company really exists.
  • Always verify the messages you received. Check with your relatives or friends if they are indeed in need of any assistance. Report these fraudulent messages to the telecommunication carrier you are using or your local government agency handling fraud cases.
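A few of the checks above can be mechanized. The sketch below is a toy illustration: the phrases, the ten-digit threshold, and the rules themselves are assumptions for demonstration, not a real spam filter.

```python
import re

# Illustrative scam indicators; a real filter would be far more thorough.
SUSPICIOUS_PHRASES = ("you won", "verify your account", "claim your prize", "reactivate")

def looks_suspicious(sender: str, body: str) -> list:
    """Return the reasons (if any) an incoming SMS looks like a scam."""
    reasons = []
    text = body.lower()
    digits = sender.lstrip("+")
    # Legitimate bulk senders typically use short codes; a full-length
    # unknown number sending marketing-style text is a red flag.
    if digits.isdigit() and len(digits) >= 10:
        reasons.append("full-length unknown number")
    if re.search(r"https?://\S+", text):
        reasons.append("contains a link")
    if any(phrase in text for phrase in SUSPICIOUS_PHRASES):
        reasons.append("scam-style wording")
    return reasons
```

A message flagged for several reasons at once, such as a prize notification with a shortened link from an unknown full-length number, is almost certainly a scam.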

Protect Your Information At All Times

Being alert and suspicious goes a long way toward protecting yourself from fraudsters who are taking advantage of technology to fool people out of their money. Verify details before establishing contact with anyone over the phone, SMS, or online.

The post How To Protect Yourself From SMS Scams appeared first on Dumb Little Man.

Cybersecurity startup SpiderSilk raises $2.25M to help prevent data breaches

Dubai-based cybersecurity startup SpiderSilk has raised $2.25 million in a pre-Series A round, led by venture firms Global Ventures and STV.

In the past two years, SpiderSilk has discovered some of the biggest data breaches: Blind, the allegedly anonymous social network, exposed private complaints by Silicon Valley employees; a lab leaked highly sensitive Samsung source code; an inadvertently public code repository revealed apps, code, and apartment building camera footage belonging to controversial facial recognition startup Clearview AI; and a massive spill of unencrypted customer card numbers at the now-defunct MoviePass may have been the final nail in the beleaguered subscription service’s coffin.

Many of those discoveries came from the company’s proprietary internet scanner, SpiderSilk co-founder and chief security officer Mossab Hussein told TechCrunch.

Any company would want their data locked down, but mistakes happen and misconfigurations can leave sensitive internal corporate data accessible from the internet. SpiderSilk helps its customers understand their attack surface by looking for things that are exposed but shouldn’t be.

The cybersecurity startup uses its scanner to map out a company’s assets and attack surfaces to detect vulnerabilities and data exposures, and it also simulates cyberattacks to help customers understand where vulnerabilities are in their defenses.
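SpiderSilk’s production scanner is proprietary, but the general idea of mapping exposed services can be sketched with a toy TCP probe. Everything here (the port list, the timeout) is an illustrative assumption:

```python
import socket

# Ports commonly associated with services that shouldn't face the internet.
COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 9200: "elasticsearch", 27017: "mongodb"}

def exposed_services(host, ports=COMMON_PORTS, timeout=1.0):
    """Return {port: service} for every port that accepts a TCP connection."""
    found = {}
    for port, service in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found[port] = service
    return found
```

A real attack-surface platform layers much more on top: attributing hosts to organizations, fingerprinting what actually answers on each port, and flagging, say, an open database port as a likely misconfiguration.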

“The attack surface management and threat detection platform we built scans the open internet on a continuous basis in order to attribute all publicly accessible assets back to organizations that could be affected by them, either directly or indirectly,” SpiderSilk’s co-founder and chief executive Rami El Malak told TechCrunch. “As a result, the platform regularly uncovers exploits and highlights how no organization is immune from infrastructure visibility blind-spots.”

El Malak said the funding will help to build out its security, engineering and data science teams, as well as its marketing and sales. He said the company is expanding its presence to North America with sales and engineering teams.

It’s the company’s second round of funding, after a seed round of $500,000 in November 2019, also led by Global Ventures and several angel investors.

“The SpiderSilk team are outstanding partners, solving a critical problem in the ever-complex world of cybersecurity, and protecting companies online from the increasing threats of malicious activity,” said Basil Moftah, general partner at Global Ventures.

Massachusetts governor won’t sign police reform bill with facial recognition ban

Massachusetts Governor Charlie Baker has returned a police reform bill back to the state legislature, asking lawmakers to strike out several provisions — including one for a statewide ban on police and public authorities using facial recognition technology, the first of its kind in the United States.

The bill, which also banned police from using rubber bullets and tear gas, was passed on December 1 by both the state’s House and Senate after senior lawmakers overcame months of deadlock to reach a consensus. Lawmakers introduced the bill in the wake of the killing of George Floyd, an unarmed Black man who died at the hands of a white Minneapolis police officer who was later charged with his murder.

Baker said in a letter to lawmakers that he objected to the ban, saying the use of facial recognition helped to convict several criminals, including a child sex offender and a double murderer.

In an interview with The Boston Globe, Baker said that he’s “not going to sign something that is going to ban facial recognition.”

Under the bill, police and public agencies across the state would be prohibited from using facial recognition, with a single exception to run facial recognition searches against the state’s driver license database with a warrant. The state would be required to publish annual transparency figures on the number of searches made by officers going forward.

The Massachusetts House voted 92-67 to pass the bill, and the Senate voted 28-12; neither margin is a veto-proof majority.

The Boston Globe said that Baker did not outright say he would veto the bill. Once the legislature hands a revised (or the same) version of the bill back to the governor, Baker can sign it, veto it or, under Massachusetts law, allow it to become law without his signature by waiting 10 days.

“Unchecked police use of surveillance technology also harms everyone’s rights to anonymity, privacy, and free speech. We urge the legislature to reject Governor Baker’s amendment and to ensure passage of commonsense regulations of government use of face surveillance,” said Carol Rose, executive director of the ACLU of Massachusetts.

A spokesperson for Baker’s office did not immediately return a request for comment.

A bug meant Twitter Fleets could still be seen after they disappeared

Twitter is the latest social media site to let users experiment with posting disappearing content. Fleets, as Twitter calls them, let mobile users post short stories, such as photos or videos with overlaid text, that are set to vanish after 24 hours.

But a bug meant that fleets weren’t deleting properly and could still be accessed long after 24 hours had expired. Details of the bug were posted in a series of tweets on Saturday, less than a week after the feature launched.

full disclosure: scraping fleets from public accounts without triggering the read notification

the endpoint is: https://t.co/332FH7TEmN

— cathode gay tube (@donk_enby) November 20, 2020

The bug effectively allowed anyone to access and download a user’s fleets without triggering a notification that the user’s fleet had been read and by whom. The implication is that this bug could be abused to archive a user’s fleets after they expire.

Using an app designed to interact with Twitter’s back-end systems via its developer API, anyone could request and receive a list of a user’s fleets from the server. Each fleet had its own direct URL, which when opened in a browser would load the fleet as an image or a video. But even after the 24 hours elapsed, the server would still return links to fleets that had already disappeared from view in the Twitter app.
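Confirming an expiry bug like this amounts to re-requesting the media URL after the 24-hour window and checking whether it still serves content. A minimal, generic sketch (the fleet media URL itself is not reproduced here):

```python
import urllib.error
import urllib.request

def url_still_live(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL still answers with an HTTP 2xx status."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, ValueError):
        return False

# A properly expired resource should make this return False; during the
# bug window, expired fleet media URLs still answered with content.
```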

When reached, a Twitter spokesperson said a fix was on the way. “We’re aware of a bug accessible through a technical workaround where some Fleets media URLs may be accessible after 24 hours. We are working on a fix that should be rolled out shortly.”

Although Twitter said the fix means fleets should now expire properly, the company won’t delete fleets from its servers for up to 30 days, and it may hold onto fleets for longer if they violate its rules. We confirmed that fleets could still be loaded from their direct URLs even after they expired.

Fleet with caution.

Apple responds to Gatekeeper issue with upcoming fixes

Apple has updated a documentation page detailing the company’s next steps to prevent last week’s Gatekeeper bug from happening again, as Rene Ritchie spotted. The company plans to implement the fixes over the next year.

Apple had a difficult launch day last week: after the company released macOS Big Sur, a major update for macOS, it suffered from server-side issues.

Third-party apps failed to launch because Macs couldn’t check each app’s developer certificate. That check, part of a feature called Gatekeeper, makes sure you didn’t download malware disguising itself as a legitimate app. If the certificate doesn’t match, macOS prevents the app from launching.

Hey Apple users:

If you’re now experiencing hangs launching apps on the Mac, I figured out the problem using Little Snitch.

It’s trustd connecting to https://t.co/FzIGwbGRan

Denying that connection fixes it, because OCSP is a soft failure.

(Disconnect internet also fixes.) pic.twitter.com/w9YciFltrb

— Jeff Johnson (@lapcatsoftware) November 12, 2020
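The “soft failure” the tweet refers to is a fail-open design: if the revocation server can’t be reached, the check is skipped rather than blocking the app. A schematic sketch of the pattern, not Apple’s actual implementation:

```python
# Schematic illustration of "soft fail" certificate revocation checking:
# if the OCSP responder can't be reached, the check fails open and the
# app is allowed to launch anyway.
def app_may_launch(check_revocation) -> bool:
    try:
        # Responder answered: honor its verdict (True = certificate OK).
        return check_revocation()
    except ConnectionError:
        # Responder unreachable (e.g. connection denied): fail open.
        return True
```

Blocking trustd’s connection forces every launch down the fail-open path, which is why denying the connection “fixed” the hangs; the slow responder was the problem, not a hard certificate failure.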

Many have been concerned about the privacy implications of the security feature. Does Apple log every app you launch on your Mac to gain competitive insights on app usage?

It turns out it’s easy to answer that question as the server doesn’t mandate encryption. Jacopo Jannone intercepted an unencrypted network request and found out that Apple is not secretly spying on you. Gatekeeper really does what it says it does.

“We have never combined data from these checks with information about Apple users or their devices. We do not use data from these checks to learn what individual users are launching or running on their devices,” the company wrote.

But Apple is going one step further and communicating its next steps. The company stopped logging IP addresses on its servers last week; it doesn’t need to store this data for Gatekeeper to work.

“These security checks have never included the user’s Apple ID or the identity of their device. To further protect privacy, we have stopped logging IP addresses associated with Developer ID certificate checks, and we will ensure that any collected IP addresses are removed from logs,” Apple writes.

Finally, Apple is overhauling the design of the network request and adding a user-facing opt-out option.

“In addition, over the next year we will introduce several changes to our security checks:

  • A new encrypted protocol for Developer ID certificate revocation checks
  • Strong protections against server failure
  • A new preference for users to opt out of these security protections”

CBP seized a shipment of OnePlus Buds thinking they were “counterfeit” Apple AirPods

U.S. Customs and Border Protection proudly announced in a press release on Friday a seizure of 2,000 boxes of “counterfeit” Apple AirPods, said to be worth about $400,000, from a shipment at John F. Kennedy Airport in New York.

But the photos in the press release appear to show boxes of OnePlus Buds, the wireless earphones made by smartphone maker OnePlus, and not Apple AirPods as CBP had claimed.

Here’s CBP’s photo of the allegedly counterfeit goods:

And this is what a box of OnePlus Buds looks like:

A photo of a box of OnePlus Buds that CBP mistook for Apple AirPods.

(Image: @yschugh/Twitter)

We reached out to OnePlus and CBP but did not hear back.

“The interception of these counterfeit earbuds is a direct reflection of the vigilance and commitment to mission success by our CBP Officers daily,” said Troy Miller, director of CBP’s New York Field Operations, in the press release.

If only it were.

Extra Crunch Friday roundup: Edtech funding surges, Poland VC survey, inside Shift’s SPAC plan, more

I live in San Francisco, but I work an East Coast schedule to get a jump on the news day. So I’d already been at my desk for a couple of hours on Wednesday morning when I looked up and saw this:

What color is the sky this morning pic.twitter.com/nt5dZp5wWc

— Walter Thompson (@YourProtagonist) September 9, 2020

As unsettling as it was to see the natural environment so transformed, I still got my work done. This is not to boast: I have a desk job and a working air filter. (People who make deliveries in the toxic air or are homeschooling their children while working from home during a global pandemic, however, impress the hell out of me.)

Not coincidentally, two of the Extra Crunch stories that ran since our Tuesday newsletter tie directly into what’s going on outside my window:

  • As this guest post predicted, a suboptimal attempt I made to track a delayed package using interactive voice response (IVR) indeed poisoned my customer experience.
  • Sheltering in place to avoid the novel coronavirus — and wildfire smoke — is fueling growth in the video-game industry, perhaps one factor in Unity Software Inc.’s plan to go public ahead of competitor Epic Games. In a two-part series, we looked at how the company has expanded beyond games and shared a detailed financial breakdown.

We covered a lot of ground this week, so scroll down or visit the recently redesigned Extra Crunch home page. If you’d like to receive this roundup via email each Tuesday and Friday, please click here.

Thanks very much for reading Extra Crunch; I hope you have a relaxing and safe weekend.

Walter Thompson
Senior Editor
@yourprotagonist


Bear and bull cases for Unity’s IPO

In a two-part series that ran on TechCrunch and Extra Crunch, former media columnist Eric Peckham returned to share his analysis of Unity Software Inc.’s S-1 filing.

Part one is a deep dive that explains how the company has grown beyond gaming to develop multiple revenue streams and where it’s headed.

For part two on Extra Crunch, he studied the company’s numbers to offer some context for its approximately $11 billion valuation.


10 Poland-based investors discuss trends, opportunities and the road ahead

The Palace of Culture and Science is a standing reminder of communism in Warsaw, Masovian Voivodeship, Poland.

Image Credits: Edwin Remsberg (opens in a new window) / Getty Images

As we’ve covered previously, the COVID-19 pandemic is making the world a lot smaller.

Investors who focus on their own backyards still have an advantage, but the ability to set up a quick coffee meeting with a promising investor is no longer one of them.

Even though some VCs are cutting first checks after Zoom calls, regional investors’ personal networks are still a trump card. Tourists will always rely on guide books, however, which is why we continue to survey investors around the world.

A Dealroom report issued this summer determined that 97 VC funds backed more than 1,600 funding rounds in Poland last year. With over 2,400 early- and late-stage startups and 400,000 engineers in the country, it’s easy to see why foreign investors are taking notice.

Editor-at-large Mike Butcher reached out to several investors who focus on Warsaw and Poland in general to learn more about the startups fueling their interest across fintech, gaming, security and other sectors:

  • Bryony Cooper, managing partner, Arkley Brinc VC
  • Anna Wnuk-Błażejczyk, investor relations manager, Experior.vc
  • Rafał Roszak, investment director, YouNick Mint
  • Michal Mroczkowski, partner, Market One Capital
  • Marcus Erken, partner, Sunfish Partners
  • Borys Musielak, partner, SMOK Ventures
  • Mathias Åsberg, partner, Nextgrid
  • Kuba Dudek, SpeedUp Venture Capital Group
  • Marcin Laczynski, partner, Next Road Ventures
  • Michał Rokosz, partner, Inovo Venture Partners

We’ll run the conclusion of his survey next Tuesday.


Brands that hyper-personalize will win the next decade

Customer Relationship Management and Leader Concepts on Whiteboard

Image Credits: cnythzl (opens in a new window) / Getty Images

Even for fledgling startups, creating a robust customer service channel — or at least one that doesn’t annoy people — is a reliable way to keep users in the sales funnel.

Using AI and automation is fine, but now that consumers have grown used to asking phones and smart speakers to predict the weather and read recipe instructions, their expectations are higher than ever.

If you’re trying to figure out what people want from hyper-personalized customer experiences and how you can operationalize AI to give them what they’re after, start here.


VCs pour funding into edtech startups as COVID-19 shakes up the market

For today’s edition of The Exchange, Natasha Mascarenhas joined Alex Wilhelm to examine how the pandemic-fueled surge of interest in edtech is manifesting on the funding front.

The numbers suggest that funding will far surpass the sector’s high-water mark set in 2018, so the duo studied the numbers through August 31, which included a number of mega-rounds that exceeded $100 million.

“Now the challenge for the sector will be keeping its growth alive in 2021, showing investors that their 2020 bets were not merely wagers made during a single, overheated year,” they conclude.


How to respond to a data breach

Digital Binary Code on Red Background. Cybercrime Concept

Image Credits: WhataWin (opens in a new window) / Getty Images

The odds are low that someone’s going to enter my home and steal my belongings. I still lock my door when I leave the house, however, and my valuables are insured. I’m an optimist, not a fool.

Similarly: Is your startup’s cybersecurity strategy based on optimism, or do you have an actual response plan in case of a data breach?

Security reporter Zack Whittaker has seen some shambolic reactions to security lapses, which is why he turned in a post-mortem about a corporation that got it right.

“Once in a while, a company’s response almost makes up for the daily deluge of hypocrisy, obfuscation and downright lies,” says Zack.


Shift’s George Arison shares 6 tips for taking your company public via a SPAC

Number 6 By Railroad Tracks During Sunset

Image Credits: Eric Burger/EyeEm (opens in a new window) / Getty Images

There’s a lot of buzz about special purpose acquisition companies these days.

Used-car marketplace Shift announced its SPAC in June 2020, and is on track to complete the process in the next few months, so co-founder/co-CEO George Arison wrote an Extra Crunch guest post to share what he has learned.

Step one: “If you go the SPAC route, you’ll need to become an expert at financial engineering.”


Dear Sophie: What is a J-1 visa and how can we use it?

Image Credits: Sophie Alcorn

Dear Sophie:

I am a software engineer and have been looking at job postings in the U.S. I’ve heard from my friends about J-1 Visa Training or J-1 Research.

What is a J-1 status? What are the requirements to qualify? Do I need to find a U.S. employer willing to sponsor me before I apply for one? Can I get a visa? How long could I stay?

— Determined in Delhi


As direct listing looms, Palantir insiders are accelerating stock sales

While we count down to the September 23 premiere of NYSE: PLTR, Danny Crichton looked at the “robust secondary market” that has allowed some investors to acquire shares early.

“Given the number of people involved and the number of shares bought and sold over the past 18 months, we can get some insight regarding how insiders perceive Palantir’s value,” he writes.


Use ‘productive paranoia’ to build cybersecurity culture at your startup

Zack Whittaker interviewed Bugcrowd CTO, founder and chairman Casey Ellis about the best practices he recommends for creating a startup culture that takes security seriously.

“It’s an everyone problem,” said Ellis, who encouraged founders to promote the notion of “productive paranoia.”

Now that the threat envelope includes everyone from marketing to engineering, employees need to “internalize the fact that bad stuff can and does happen if you do it wrong,” Ellis said.

5 Steps for Managing Your IT Audit & Security Career In Strange Times

No one expected this to be a year of uncertainty when we were making those ‘Mission 2020’ statements. For individuals, professionals, businesses, and humanity at large, these are trying times. The question now is: amid a crisis, how should we think about managing our careers?

Recent surveys and key findings point to what the future might hold for tech professionals. Interestingly, these findings suggest that IT Audit, cyber, and information security jobs are more stable than those in other domains.

For now, this is good news; how long it will remain true is anyone’s guess.

Experts also note that the challenges go beyond job losses, insecurity, pay cuts, and furloughs. As a generation, we are struggling to balance work and family, manage stress, and stay continuously connected with co-workers and remote teams in this era of the ‘new normal’.

The tech community and IT professionals, known for their inventiveness, are finding new ways to cope with these blues and adopting new strategies for building connections and getting work done every day.

Career planning for 2020 and beyond is still a daunting challenge, but it need not be stressful if you have a strategic, well-laid plan in place.

Below are five actionable steps every IT Audit & Security professional can take to safeguard their career and prepare for post-pandemic changes:

First and foremost, Update Your Profile for the Era

Simply updating your profile with your latest job role and responsibilities is not enough in these times. Frankly, loading up your resume with self-descriptive adjectives such as ‘accomplished security professional’ and ‘exceptional leader’ is passé.

No one believes those terms anymore. The most valuable content you can add to your profile now is your contributions in facts and figures, the value you bring to the table, and the problems you can handle or solve beyond your regular KRAs. Describe them in a clear, factual manner, and present your skills, recent contributions, and major projects concisely to draw attention.

As an IT Audit and Security professional, clearly listing your CISA certification or related credentials is highly valued in the field.

Second, Keep a Record of Your Recent Projects

Few things are more helpful than an up-to-date record of your past work projects. Create a journal (a spreadsheet or document) for each year. For every project, record the who, what, when, and how, the tools used, the duration, the challenges and lessons learned, and, above all, the value it added to your journey.

Start with the present year and work backward chronologically. This gives you a clear picture of your growth and a better perspective whenever you need to present yourself, for example, at performance reviews, in future job interviews, or simply in conversations with bosses and senior colleagues.

Third, Make an Enticing Professional Community Profile

Create and maintain a compelling profile on popular professional networking platforms, and let hiring managers headhunt you for top positions in your domain. Showcase your most valued projects.

As an IT Audit and Security expert, be sure to mention your newly earned CISA certification on your profile, and ask former managers to leave you a recommendation.

These recommendations are immensely helpful and reflect your credibility. Along with hiring managers, your past and present colleagues, bosses, and clients can also see your skills there.

A strong profile also supports your personal brand and marketing. Audit and review it periodically.

Fourth, Stay Connected, Keep Your Network Updated

In this era of online social networking, people may not often get to meet you in person, but you can always stay connected with your best career references.

Keep track of your networking activities: take time to chat with past colleagues, and drop a line or schedule a video call with a former boss. Let them know about your new achievements and industry takeaways, and share an interesting article along with a note on your insight. Keep them up to date on your latest endeavors, including any new certifications you have earned.

Remember: inspiration is contagious. Your constant striving for betterment may well motivate others to take career progression seriously, even in these trying times, and to focus on the future beyond this pandemic.

Fifth, Evaluate, Learn, and Earn Those Pending Certifications and Training

Top management, expert, and leadership roles are preferentially offered to those who hold globally recognized certifications that validate top-level expertise. Earning industry-recognized certifications in the IS Audit and Security domain, such as the CISA, demonstrates both your knowledge and your willingness to do whatever it takes to reach your goals. A certification like the CISA can also guide your next career move.

Notably, certification bodies like ISACA have changed their exam format as part of today’s ‘new normal’: aspirants can now take an online remote-proctored exam.

This shows how committed both ISACA and aspiring professionals are to career progress in the face of the pandemic. Certification bodies are addressing the need for uninterrupted access to their resources while we are all confined to our homes, making exams available with a remote-proctored option.

To help aspirants continue learning and upskilling, many training providers now offer instructor-led online CISA training to equip learners with the knowledge and skills to pass the CISA exam.

By applying these strategies to your career path, you will be ready for what comes next. Beyond preparing for future roles, you will gain more clarity about your current credentials, making job searches and internal growth more effective.

As leaders of tomorrow, you will have done your job well!

The post 5 Steps for Managing Your IT Audit & Security Career In Strange Times appeared first on Dumb Little Man.

CBP says it’s ‘unrealistic’ for Americans to avoid its license plate surveillance

U.S. Customs and Border Protection has admitted that there is no practical way for Americans to avoid having their movements tracked by its license plate readers, according to its latest privacy assessment.

CBP published its new assessment — three years after its first — to notify the public that it plans to tap into a commercial database, which aggregates license plate data from both private and public sources, as part of its border enforcement efforts.

The U.S. has a massive network of license plate readers, typically found on the roadside, to collect and record the license plates of vehicles passing by. License plate readers can capture thousands of license plates each minute. License plates are recorded and stored in massive databases, giving police and law enforcement agencies the ability to track millions of vehicles across the country.

The agency updated its privacy assessment in part because Americans “may not be aware” that the agency can collect their license plate data.

“CBP cannot provide timely notice of license plate reads obtained from various sources outside of its control,” the privacy assessment said. “Many areas of both public and private property have signage that alerts individuals that the area is under surveillance; however, this signage does not consistently include a description of how and with whom such data may be shared.”

But buried in the document, the agency admitted: “The only way to opt out of such surveillance is to avoid the impacted area, which may pose significant hardships and be generally unrealistic.”

CBP struck a similar tone in 2017 during a trial that scanned the faces of American travelers as they departed the U.S., a move that drew ire from civil liberties advocates at the time. CBP told Americans that travelers who wanted to opt out of the face scanning had to “refrain from traveling.”

The document added that the privacy risk to Americans is “enhanced” because the agency “may access [license plate data] captured anywhere in the United States,” including outside of the 100-mile border zone within which the CBP typically operates.

CBP said that it will reduce the risk by only accessing license plate data when there is “circumstantial or supporting evidence” to further an investigation, and will only let CBP agents access data within a five-year period from the date of the search.

A spokesperson for CBP did not respond to a request for comment on the latest assessment.

CBP doesn’t have the best track record with license plate data. Last year, CBP confirmed that a subcontractor, Perceptics, improperly copied license plate data on “fewer than 100,000” people over a period of a month-and-a-half at a U.S. port of entry on the southern border. The agency later suspended its contract with Perceptics.

Google reportedly cancelled a cloud project meant for countries including China

After reportedly spending a year and a half working on a cloud service meant for China and other countries, Google cancelled the project, called “Isolated Region,” in May, due partly to geopolitical and pandemic-related concerns. Bloomberg reports that the project would have enabled Google to offer cloud services in countries that want to keep and control data within their borders.

According to two Google employees who spoke to Bloomberg, the project was part of a larger initiative called “Sharded Google” to create data and processing infrastructure that is completely separate from the rest of the company’s network. Isolated Region began in early 2018 in response to Chinese regulations that mean foreign tech companies that want to enter the country need to form a joint venture with a local company that would hold control over user data. Isolated Region was meant to help meet requirements like this in China and other countries, while also addressing U.S. national security concerns.

Bloomberg’s sources said the project was paused in China in January 2019, and focus was redirected to Europe, the Middle East and Africa instead, before Isolated Region was ultimately cancelled in May, though Google has since considered offering a smaller version of Google Cloud Platform in China.

After the story was first published, a Google representative told Bloomberg that Isolated Region wasn’t shut down because of geopolitical issues or the pandemic, and that the company “does not offer and has not offered cloud platform services inside China.”

Instead, she said Isolated Region was cancelled because “other approaches we were actively pursuing offered better outcomes. We have a comprehensive approach to addressing these requirements that covers the governance of data, operational practices and survivability of software. Isolated Region was just one of the paths we explored to address these requirements.”

Alphabet, Google’s parent company, broke out Google Cloud as its own line item for the first time in its fourth-quarter and full-year earnings report, released in February. It revealed that its run rate grew 53.6% during the last year to just over $10 billion in 2019, making it a more formidable rival to competitors Amazon and Microsoft.

Secretive data startup Palantir has confidentially filed for an IPO

Secretive big data and analytics startup Palantir, co-founded by Peter Thiel, said late Monday it has confidentially filed paperwork with the U.S. Securities and Exchange Commission to go public.

Its statement said little more. “The public listing is expected to take place after the SEC completes its review process, subject to market and other conditions.”

Palantir did not say when it plans to go public, nor did it provide other information such as how many shares it would potentially sell or the share price range for the IPO. Confidential IPO filings allow companies to bypass the traditional IPO filing mechanisms that give insights into their inner workings, such as financial figures and potential risks. Instead, Palantir can explore the early stages of setting itself up for a public listing without the public scrutiny that comes with the process. The strategy has been used by companies such as Spotify, Slack and Uber. However, a confidential filing doesn’t always translate to an IPO.

A Palantir spokesperson, when reached, declined to comment further.

Palantir is one of the more secretive firms in Silicon Valley, a provider of big data and analytics technologies, including to the U.S. government and intelligence community. Much of that work has drawn controversy from privacy and civil liberties activists. For example, investigations show that the company’s data mining software was used to create profiles of immigrants and consequently aid deportation efforts by ICE.

As the coronavirus pandemic spread throughout the world, Palantir pitched its technology to bring big data to tracking efforts.

Last week, Palantir filed its first Form D in four years indicating that it is raising $961 million. According to the filing, $550 million has already been raised and capital commitments for the remaining allotment have been secured.

With today’s news, the cash raise looks complementary to the company’s ambitions to go public. One report estimates that the company’s valuation hovers at $26 billion.

Palantir’s filing is another example of how the IPO market is heating up yet again, despite the freeze COVID-19 put on so many companies. Last week, insurance provider Lemonade debuted on the public market to warm waters. Accolade, a healthcare benefits company, similarly sold more shares than expected.

These free tools blur protesters’ faces and remove photo metadata

Millions have taken to the streets across the world to protest the murder of George Floyd, an unarmed black man killed by a white police officer in Minneapolis last month.

Protesters have faced both unprecedented police violence and surveillance. Just this week, the Justice Department granted the Drug Enforcement Administration, an agency typically tasked with enforcing federal drug-related laws, the authority to “conduct covert surveillance” on civilians as part of the government’s efforts to quell the protests. As one of the most tech savvy government agencies, it has access to billions of domestic phone records, cell site simulators, and, like many other federal agencies, facial recognition technology.

It’s in part because of this intense surveillance that protesters fear they could face retaliation.

But in the past week, developers have rushed to build apps and tools that let protesters scrub hidden metadata from their photos, and mask or blur faces to prevent facial recognition systems from identifying protesters.

Everest Pipkin built a web app that strips images of their metadata and lets users blur faces — or mask faces completely, making it more difficult for neural networks to reverse blurring. The web app runs entirely in the browser and doesn’t upload or store any data. They also open-sourced the code, allowing anyone to download and run the app on their own offline device.

i built a tool for quickly scrubbing metadata from images and selectively blurring faces and identifiable features. it runs on a phone or computer, and doesn’t send info anywhere.

process your images so that you and others are safe:https://t.co/GbQu5ZweDq pic.twitter.com/jKjABTgPRX

— everest (@everestpipkin) May 31, 2020
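The metadata-scrubbing half of tools like Pipkin’s can be illustrated in a few lines. The sketch below is not the web app’s actual code, just a stand-alone example of the idea: it walks a JPEG’s segment list and drops the APP1 segment (EXIF data, which can include GPS coordinates and device details) and any comments, while copying the image data through untouched.

```python
import struct

def strip_jpeg_metadata(data: bytes) -> bytes:
    """Remove APP1 (EXIF/XMP) and comment segments from a JPEG byte stream."""
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")  # keep the start-of-image marker
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            out += data[i:]  # unexpected bytes: copy the remainder verbatim
            break
        marker = data[i + 1]
        if marker == 0xDA:  # start-of-scan: pixel data follows, copy the rest
            out += data[i:]
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        # Drop APP1 (0xE1: EXIF/XMP) and COM (0xFE: comment) segments;
        # keep everything else (quantization tables, Huffman tables, etc.).
        if marker not in (0xE1, 0xFE):
            out += data[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Real photos also carry metadata in other containers (PNG text chunks, vendor-specific APP segments), so a production tool has to handle more cases than this, but the principle is the same: keep the pixels, discard everything that describes where and how they were captured.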

Pipkin is one of a few developers who have rushed to help protesters protect their privacy.

“I saw a bunch of discourse about how law enforcement is aggregating videos of the protests from social media to identify protesters,” developer Sam Loeschen told TechCrunch. He built Censr, a virtual reality app that works on the iPhone XR and later, which masks and pixelates photos in real-time.

The app also scrubs images of metadata, making it more difficult to identify the source and the location of the masked image. Loeschen said it was a “really easy weekend project.” It’s currently in beta.

📣📣 Announcing censr: a simple camera app for protecting your identity!

available for iPhone XR and up

distributing to protestors and press through TestFlight. Send me a DM for the link! pic.twitter.com/J1Znd2ZKqN

— Sam Loeschen (@polygone_) June 5, 2020

Noah Conk built an iPhone Shortcut that uses Amazon’s facial recognition system and automatically blurs any faces it detects. Conk said in a tweet there was no way to blur images on the device but that he does not save the image.

The idea is smart, but it means that any photo uploaded could, if stored, theoretically be obtained by law enforcement with a legal order. You also need to “allow untrusted shortcuts,” which could open the door to potentially malicious shortcuts. Know the risks before enabling the setting, and keep it disabled when you don’t need it.

Helping protesters and others blur and anonymize photos is an idea that’s taking off.

Just this week, end-to-end encrypted messaging app Signal added its own photo blurring feature, one that couldn’t come soon enough, as its user base has spiked since the protests began.

Signal founder Moxie Marlinspike said in a blog post that the move was to help “support everyone in the streets,” including those protesting in the U.S. and around the world, in many cases defying social distancing rules by governments put in place to slow the spread of the coronavirus pandemic.

“One immediate thing seems clear: 2020 is a pretty good year to cover your face,” said Marlinspike.

Hackers release a new jailbreak that unlocks every iPhone

A renowned iPhone hacking team has released a new “jailbreak” tool that unlocks every iPhone, even the most recent models running the latest iOS 13.5.

For as long as Apple has kept up its “walled garden” approach to iPhones by only allowing apps and customizations that it approves, hackers have tried to break free from what they call the “jail,” hence the name “jailbreak.” Hackers do this by finding a previously undisclosed vulnerability in iOS that breaks through some of the many restrictions Apple puts in place to prevent access to the underlying software. Apple says it does this for security. But jailbreakers say breaking through those restrictions allows them to customize their iPhones more than they otherwise could, in a way that most Android users are already accustomed to.

The jailbreak, released by the unc0ver team, supports all iPhones that run iOS 11 and above, including up to iOS 13.5, which Apple released this week.

Details of the vulnerability that the hackers used to build the jailbreak aren’t known, but it’s not expected to last forever. Just as jailbreakers work to find a way in, Apple works fast to patch the flaws and close the jailbreak.

Security experts typically advise iPhone users against jailbreaking, because breaking out of the “walled garden” vastly increases the surface area for new vulnerabilities to exist and to be found.

The jailbreak comes at a time when the shine is wearing off of Apple’s typically strong security image. Last week, Zerodium, a broker for exploits, said it would no longer buy certain iPhone vulnerabilities because there were too many of them. Motherboard reported this week that hackers got their hands on a pre-release version of the upcoming iOS 14 release several months ago.

Ransomware Attack: Everything You Need To Know To Stop It

All businesses — whether they are large corporations, governments, or small organizations — are at high risk of being attacked by hackers and cybercriminals aiming to steal their money. According to a 2019 report, cyberattacks increased by 235% that year.

Malware and phishing are among the most common types of cyberattacks, but few are more dangerous than ransomware, a type of malware threat that is on the rise. No matter the size of your business, it is essential to know how to protect it from ransomware, and before you can stop ransomware attacks, you need to understand them.

This article covers ransomware prevention: what a ransomware attack is, why ransomware is so effective, and how to protect your data from it.

What Is a Ransomware Attack?

Ransomware is one of the biggest threats facing cybersecurity teams around the world. This type of malware encrypts a victim’s data and holds it hostage until a ransom is paid. From small teams to large enterprises and government networks, this form of cyberattack targets organizations of every size, and a single incident can cripple a network and cause severe damage to infrastructure.

One well-known example is the attack on Wolverine Solutions Group, which was hit by ransomware in September 2018. The malware encrypted many of the organization’s files, leaving workers unable to access them.

Fortunately, forensics experts were able to decrypt and restore the files on October 3. However, a large amount of patient data was compromised as a result of the attack.

Why is Ransomware So Effective?

Ransomware attacks can damage organizations of every kind, causing lost productivity and financial harm. Most obviously, the lost data may represent hundreds of hours of work or customer records essential to the smooth running of your organization. Ransomware generates over $25 million in revenue for hackers each year, a measure of how effective it is at extorting money. Let’s look at a few reasons why.

Ransomware Targets Human Weaknesses

By targeting businesses with phishing attacks, hackers can use ransomware to bypass traditional security technologies. Email is a weak point in many businesses’ security infrastructure, and hackers exploit it with ransomware, as well as with trojan horses that prey on human error. Lack of awareness of cybersecurity threats is the major issue here: many people do not know what threats look like or which files and email attachments they should avoid opening. This lack of security awareness is the main reason ransomware spreads so much faster than other threats.

Lack of Strong Technological Defenses

Another reason ransomware is so effective is the lack of strong technological defenses. Many companies have weak network defenses and fail to block these attacks, since robust security tooling can be costly and complex to deploy. In other cases, the IT department cannot persuade company leadership to invest in strong security defenses until it is too late.

Outdated Hardware and Software

Beyond weak defenses, many organizations depend too heavily on outdated hardware and software. Hackers discover security vulnerabilities over time; technology companies push out security updates, but many organizations have no way to verify that users actually install them. Many companies also rely on older computers that are no longer supported and are therefore open to known vulnerabilities. This is another reason ransomware has become one of the most widespread and disruptive threats.

How To Prevent Ransomware Attacks?

Want to keep your data and files from being held for ransom? The best way to stop ransomware is to be proactive: put strong protections in place before ransomware can infect your systems. Here are some tips.

Strong, Reputable Endpoint Antivirus Security

A strong endpoint security solution is one of the most important defenses against ransomware. Security software on your endpoint devices can block malware from corrupting your systems, data, or files, stop malicious downloads, and alert users when they visit risky websites. Because cybercriminals constantly create new strains of malware, these systems are not guaranteed to be 100% effective, but endpoint security remains an essential layer of protection against ransomware.

Email Security, Inside and Outside the Gateway

Because ransomware is commonly delivered via email, a secure email gateway is one of the best ways to stop these attacks. Gateway technologies filter email communications with URL defenses, identifying threats and blocking them before they reach users. They can prevent ransomware from ever arriving on endpoint devices and stop users from unintentionally installing it.
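As a toy illustration of the URL-filtering idea (not any particular vendor’s product), a gateway can extract URLs from a message body and flag those whose hostnames appear on a denylist or imitate well-known brands. The domains and lookalike patterns below are hypothetical examples:

```python
import re
from urllib.parse import urlparse

# Hypothetical denylist and brand-lookalike rules, for illustration only;
# real gateways rely on live threat-intelligence feeds.
BLOCKED_DOMAINS = {"evil.example", "malware-payload.example"}
LOOKALIKE = re.compile(r"(paypa1|g00gle|micros0ft)", re.IGNORECASE)

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def suspicious_urls(email_body: str) -> list:
    """Return URLs in an email body matching denylist or lookalike rules."""
    flagged = []
    for url in URL_PATTERN.findall(email_body):
        host = urlparse(url).hostname or ""
        if host in BLOCKED_DOMAINS or LOOKALIKE.search(host):
            flagged.append(url)
    return flagged
```

Production gateways go much further, with URL rewriting, click-time checks and attachment sandboxing, but the core pattern is the same: inspect every link before the user ever sees it.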

Data Backup and Recovery

If ransomware does get installed on your systems and your data is compromised, you need to restore what you need quickly and minimize downtime to protect your organization. To make that possible, ensure your data is backed up in multiple places, including your main storage area, on local disks, and in a cloud continuity service.

In the event of a cyberattack, a cloud data backup and recovery platform lets a business recover its data, which makes it an important tool for remediating ransomware threats.

See Also: How Backup And Storage Led To Cybercrime

Don’t Let Ransomware Damage Your Organization

Ransomware is malware that criminals use to extort money, and it has been spreading quickly for several years. If you want to avoid the productivity and financial losses these threats cause, follow the steps above to protect your organization against damaging ransomware attacks.

The post Ransomware Attack: Everything You Need To Know To Stop It appeared first on Dumb Little Man.

Meet EventBot, a new Android malware that steals banking passwords and two-factor codes

Security researchers are sounding the alarm over a newly discovered Android malware that targets banking apps and cryptocurrency wallets.

The malware, which researchers at security firm Cybereason recently discovered and called EventBot, masquerades as a legitimate Android app — like Adobe Flash or Microsoft Word for Android — which abuses Android’s in-built accessibility features to obtain deep access to the device’s operating system.

Once installed — either by an unsuspecting user or by a malicious person with access to a victim’s phone — the EventBot-infected fake app quietly siphons off passwords for more than 200 banking and cryptocurrency apps — including PayPal, Coinbase, CapitalOne and HSBC — and intercepts two-factor authentication text message codes.

With a victim’s password and two-factor code, the hackers can break into bank accounts, apps and wallets, and steal a victim’s funds.

“The developer behind Eventbot has invested a lot of time and resources into creating the code, and the level of sophistication and capabilities is really high,” Assaf Dahan, head of threat research at Cybereason, told TechCrunch.

The malware quietly records every tap and key press, and can read notifications from other installed apps, giving the hackers a window into what’s happening on a victim’s device.

Over time, the malware siphons off banking and cryptocurrency app passwords back to the hackers’ server.

The researchers said that EventBot remains a work in progress. Over a period of several weeks since its discovery in March, the researchers saw the malware iteratively update every few days to include new malicious features. At one point the malware’s creators improved the encryption scheme it uses to communicate with the hackers’ server, and included a new feature that can grab a user’s device lock code, likely to let the malware grant itself access to higher-privileged functions on the victim’s device, such as payments and system settings.

But while the researchers are stumped as to who is behind the campaign, their research suggests the malware is brand new.

“Thus far, we haven’t observed clear cases of copy-paste or code reuse from other malware and it seems to have been written from scratch,” said Dahan.

Android malware is not new, but it’s on the rise. Hackers and malware operators have increasingly targeted mobile users because many device owners have their banking apps, social media, and other sensitive services on their device. Google has improved Android security in recent years by screening apps in its app store and proactively blocking third-party apps to cut down on malware — with mixed results. Many malicious apps have evaded Google’s detection.

Cybereason said it has not yet seen EventBot on Android’s app store or in active use in malware campaigns, limiting the exposure to potential victims — for now.

But the researchers said users should avoid untrusted apps from third-party sites and stores, many of which don’t screen their apps for malware.

Zoom admits some calls were routed through China by mistake

Hours after security researchers at Citizen Lab reported that some Zoom calls were routed through China, the video conferencing platform has offered an apology and a partial explanation.

To recap, Zoom has faced a barrage of headlines this week over its security policies and privacy practices, as hundreds of millions of people forced to work from home during the coronavirus pandemic still need to communicate with each other.

The latest findings landed earlier today when Citizen Lab researchers said that some calls made in North America were routed through China — as were the encryption keys used to secure those calls. But as was noted this week, Zoom isn’t end-to-end encrypted at all, despite the company’s earlier claims, meaning that Zoom controls the encryption keys and can therefore access the contents of its customers’ calls. Zoom said in an earlier blog post that it has “implemented robust and validated internal controls to prevent unauthorized access to any content that users share during meetings.” The same can’t be said for Chinese authorities, however, which could demand Zoom turn over any encryption keys on its servers in China to facilitate decryption of the contents of encrypted calls.

Zoom now says that during its efforts to ramp up its server capacity to accommodate the massive influx of users over the past few weeks, it “mistakenly” allowed two of its Chinese data centers to accept calls as a backup in the event of network congestion.

From Zoom’s CEO Eric Yuan:

“During normal operations, Zoom clients attempt to connect to a series of primary datacenters in or near a user’s region, and if those multiple connection attempts fail due to network congestion or other issues, clients will reach out to two secondary datacenters off of a list of several secondary datacenters as a potential backup bridge to the Zoom platform. In all instances, Zoom clients are provided with a list of datacenters appropriate to their region. This system is critical to Zoom’s trademark reliability, particularly during times of massive internet stress.”

In other words, North American calls are supposed to stay in North America, just as European calls are supposed to stay in Europe. This is what Zoom calls its data center “geofencing.” But when traffic spikes, the network shifts traffic to the nearest data center with the most available capacity.
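The primary-then-backup selection Zoom describes can be sketched in a few lines. This is a hypothetical illustration of how an over-broad backup whitelist defeats geofencing; the datacenter names and selection logic here are invented for the example and are not Zoom’s actual implementation.

```python
# Hypothetical sketch of region-based datacenter fallback ("geofencing").
# All names and data are illustrative, not Zoom's real topology.

REGION_DATACENTERS = {
    "north_america": ["us-east", "us-west"],
    "europe": ["eu-central", "eu-west"],
}

# Shared backup list used under congestion. Adding a Chinese datacenter
# here (as Zoom says happened by mistake) silently breaks the geofence.
BACKUP_WHITELIST = ["us-central", "cn-beijing"]

def pick_datacenter(region, available):
    """Return the first reachable primary datacenter for the region,
    falling back to the shared backup whitelist under congestion."""
    for dc in REGION_DATACENTERS.get(region, []):
        if dc in available:
            return dc
    for dc in BACKUP_WHITELIST:
        if dc in available:
            return dc
    return None

# Normal operation: a North American call stays in North America.
print(pick_datacenter("north_america", {"us-east", "cn-beijing"}))  # us-east

# Congestion: primaries unreachable, so the call falls through to a
# backup datacenter that should never have been on the list.
print(pick_datacenter("north_america", {"cn-beijing"}))  # cn-beijing
```

The failure mode is that the geofence lives entirely in the contents of the backup list, so one bad whitelist entry reroutes traffic with no other code change.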

China, however, is supposed to be an exception, largely due to privacy concerns among Western companies. But China’s own laws and regulations mandate that companies operating on the mainland must keep citizens’ data within its borders.

Zoom said that capacity “rapidly added” to its Chinese regions in February to handle demand was also put on an international whitelist of backup data centers, which meant non-Chinese users were in some cases connected to Chinese servers when data centers in other regions were unavailable.

Zoom said this happened in “extremely limited circumstances.” When reached, a Zoom spokesperson did not quantify the number of users affected.

Zoom said that it has now reversed that incorrect whitelisting. The company also said users on the company’s dedicated government plan were not affected by the accidental rerouting.

But some questions remain. The blog post only briefly addresses its encryption design. Citizen Lab criticized the company for “rolling its own” encryption — otherwise known as building its own encryption scheme. Experts have long rejected efforts by companies to build their own encryption, because it doesn’t undergo the same scrutiny and peer review as the decades-old encryption standards we all use today.

Zoom said in its defense that it can “do better” on its encryption scheme, which it says covers a “large range of use cases.” Zoom also said it was consulting with outside experts, but when asked, a spokesperson declined to name any.

Bill Marczak, one of the Citizen Lab researchers that authored today’s report, told TechCrunch he was “cautiously optimistic” about Zoom’s response.

“The bigger issue here is that Zoom has apparently written their own scheme for encrypting and securing calls,” he said, and that “there are Zoom servers in Beijing that have access to the meeting encryption keys.”

“If you’re a well-resourced entity, obtaining a copy of the internet traffic containing some particularly high-value encrypted Zoom call is perhaps not that hard,” said Marczak.

“The huge shift to platforms like Zoom during the COVID-19 pandemic makes platforms like Zoom attractive targets for many different types of intelligence agencies, not just China,” he said. “Fortunately, the company has (so far) hit all the right notes in responding to this new wave of scrutiny from security researchers, and have committed themselves to make improvements in their app.”

Zoom’s blog post gets points for transparency. But the company is still facing pressure from New York’s attorney general and from two class-action lawsuits. Just today, several lawmakers demanded to know what it’s doing to protect users’ privacy.

Will Zoom’s mea culpas be enough?

Hackers are targeting other hackers by infecting their tools with malware

A newly discovered malware campaign suggests that hackers have themselves become the targets of other hackers, who are infecting and repackaging popular hacking tools with malware.

Cybereason’s Amit Serper found that the attackers in this years-long campaign are taking existing hacking tools, ranging from tools designed to exfiltrate data from databases to cracks and product key generators that unlock full versions of trial software, and injecting a powerful remote-access trojan into them. When the tools are opened, the hackers gain full access to the target’s computer.

Serper said the attackers are “baiting” other hackers by posting the repackaged tools on hacking forums.

But it’s not just a case of hackers targeting other hackers, Serper told TechCrunch. These maliciously repackaged tools are not only opening a backdoor to the hacker’s systems, but also any system that the hacker has already breached.

“If hackers are targeting you or your business and they are using these trojanized tools it means that whoever is hacking the hackers will have access to your assets as well,” Serper said.

That includes offensive security researchers working on red team engagements, he said.

Serper found that these as-yet-unknown attackers are injecting and repackaging the hacking tools with njRat, a powerful trojan, which gives the attacker full access to the target’s desktop, including files, passwords, and even access to their webcam and microphone. The trojan dates back to at least 2013 when it was used frequently against targets in the Middle East. njRat often spreads through phishing emails and infected flash drives, but more recently hackers have injected the malware on dormant or insecure websites in an effort to evade detection. In 2017, hackers used this same tactic to host malware on the website for the so-called Islamic State’s propaganda unit.

Serper found the attackers were using that same website-hacking technique to host njRat in this most recent campaign.

According to his findings, the attackers compromised several websites — unbeknownst to their owners — to host hundreds of njRat malware samples, as well as the infrastructure used by the attackers to command and control the malware. Serper said that the process of injecting the njRat trojan into the hacking tools occurs almost daily and may be automated, suggesting that the attacks are run largely without direct human interaction.

It’s unclear why this campaign exists or who is behind it.

Microsoft will now pay up to $20k for Xbox Live security exploits

Think you’ve found a glaring security hole in Xbox Live? Microsoft is interested.

The company announced a new bug bounty program today, focused specifically on its Xbox Live network and services. Depending on how serious the exploit is and how complete your report is, they’re paying up to $20,000.

Like most bug bounty programs, Microsoft is looking for pretty specific/serious security flaws here. Found a way to execute unauthorized code on Microsoft’s servers? They’ll pay for that. Keep getting disconnected from Live when you play as a certain legend in Apex? Not quite the kind of bug they’re looking for.

Microsoft also specifically rules out a few types of vulnerabilities as out-of-scope, including DDoS attacks, anything that involves phishing Microsoft employees or Xbox customers, or getting servers to cough up basic info like server name or internal IP. You can find the full breakdown here.

This is by no means Microsoft’s first foray into bounty programs; they’ve got similar programs for the Microsoft Edge browser, their “Windows Insider” preview builds, Office 365, and plenty of other categories. The biggest bounties they offer are on their cloud computing service, Azure, where the bounty for a super specific bug (gaining admin access to an Azure Security Lab account, which are closely controlled) can net up to $300,000.

Are Biometrics The Future Of Security?

Biometric security is rapidly making its way into mobile technology, and today, 57% of apps feature a biometric login option.

Biometric security uses physical and behavioral markers to identify authorized users and detect impostors. 46% of Americans use biometrics because they are more secure and 70% of Americans say biometrics are easier than traditional security.

Since 2013, the global biometrics market has risen to become a $14-billion industry. On top of using biometrics to secure their devices, biometric payments have become the next hot thing with consumers.

About 86% of Americans choose biometrics in verifying their identities or approving payments.  With that, it’s easy to find yourself asking if the safety of biometrics is better than passwords and PINs.

Thanks to the rise in biometric payments, 48% of Americans have used this technology.

To paint a better picture, imagine approving an Apple Pay, Venmo, Cash App or Google Pay transaction with your fingerprint.

Convenient and safe, right?

That’s the same reason why about 42% of Americans refuse to use banking apps that don’t have biometric authentication. 63% say they prefer this technology when physically shopping, too.

Interestingly enough, 80% of iPhone owners use biometrics, while 25% of Android users, 12% of laptop users, and 11% of tablet users do the same. Regarding preference, 63% of Americans default to fingerprint scanners, 14% prefer facial recognition, 8% prefer the old ways of doing things, and 2% use voice recognition as often as possible.

Biometrics don’t just rely on your physical traits.

The technology also analyzes your behaviors for authentication. Your face, fingerprints, retinas, and voice are among the physical identifiers mentioned in the definition.

How, when, and where you use your device, how you hold it, how you move, and how frequently you use your device are examples of behavioral identifiers embedded into biometric security.


46% of Americans feel biometrics are more secure.

So, what makes them so tough to hack?

For one, biometric security isn’t standardized. Each device requires a unique approach to use, and therefore a unique approach to hack.

This means biometrics take far longer to crack than passwords. Because it’s difficult for a biometric hacking attempt to go unnoticed, hackers must act carefully. Creating a fake to dupe a biometric system is possible but requires large amounts of user data, despite what you’ve seen in the movies.

On this note, the movies certainly make this type of technology look easy to hack. Sean Connery fooled a fingerprint scan in the 1971 movie Diamonds Are Forever, and Ethan Hawke bypassed a blood test in Gattaca (1997).

Real-life instances of duping biometrics have occurred.

Let’s discuss:

Masks can be used to trick facial recognition biometric software, unlocking a device or granting access to information/applications. A cybersecurity firm in Vietnam called Bkav used a 3-D printed mask, paper tape, and silicone to crack facial recognition. Siblings, a mother, a son, and even distant cousins have also been able to unlock each other’s iPhones using Face ID.

Photos can also do the trick, quite literally.

Certain Android devices have been shown to be fooled by just holding up a photo or another device showing a photo. This includes devices from manufacturers including Motorola, Samsung, Huawei, and Sony.

Fingerprints — what 63% prefer to register into biometrics — can also be faked. The latest top-of-the-line Samsung smartphone boasts an ultrasonic fingerprint sensor. While this technology is hailed as less hackable than other similar technologies, 3-D printed fingerprints have been shown to do the trick.


Here is a scenario in which biometric sensors can be easily hacked:

When your face doesn’t unlock your phone, the device prompts for your passcode. Entering the correct code prompts the software to update its facial metrics. Unfortunately, if someone, such as a family member or close relative, looks like you and knows your passcode, the software could eventually be trained to recognize them instead of you.
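The retraining loophole described above can be sketched with a toy model. Everything here is invented for illustration: the one-dimensional “face embedding,” the 0.15 threshold, and the simple averaging update. Real systems such as Face ID use far more sophisticated adaptation; this only demonstrates the failure mode.

```python
# Toy model of face-unlock with passcode fallback and template
# adaptation. All numbers and the update rule are illustrative.

THRESHOLD = 0.15  # maximum distance for a face "match"

class FaceLock:
    def __init__(self, owner_face, passcode):
        self.template = owner_face  # toy 1-D "face embedding"
        self.passcode = passcode

    def unlock(self, face, passcode=None):
        if abs(face - self.template) < THRESHOLD:
            return True  # face matched outright
        if passcode == self.passcode:
            # Correct passcode after a failed match: adapt the stored
            # template toward the face that just failed.
            self.template = (self.template + face) / 2
            return True
        return False

lock = FaceLock(owner_face=0.0, passcode="1234")
lookalike = 0.25  # similar to the owner, but not close enough to match

assert not lock.unlock(lookalike)        # face alone fails
assert lock.unlock(lookalike, "1234")    # passcode works; template drifts
assert lock.unlock(lookalike)            # now the look-alike matches outright
```

Each passcode-assisted unlock nudges the stored template toward the impostor, which is exactly the drift the paragraph above warns about.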

See Also: 6 Foolproof Tips for Creating Powerful Passwords

Final Words

Although biometrics are safer than traditional passwords, using the technology doesn’t make your device bulletproof. Using two-step authentication, choosing the best technology, knowing how your security can fail, and proper supervision can help you maximize your security.

To read more on biometric security, check out the infographic below.
Biometric Security
Source: Computer Science Zone

The post Are Biometrics The Future Of Security? appeared first on Dumb Little Man.

Mixcloud data breach exposes over 20 million user records

A data breach at Mixcloud, a U.K.-based audio streaming platform, has left more than 20 million user accounts exposed after the data was put on sale on the dark web.

The data breach happened earlier in November, according to a dark web seller who supplied a portion of the data to TechCrunch, allowing us to examine and verify the authenticity of the data.

The data contained usernames, email addresses, and passwords that appear to be hashed with the SHA-2 algorithm, making the passwords nearly impossible to recover. The data also contained account sign-up dates and last-login dates. It also included the country from which each user signed up, their IP address, and links to profile photos.
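As a rough illustration of why SHA-2-hashed passwords are so hard to recover: a cryptographic hash is one-way, so an attacker holding only the digest must guess candidate passwords and compare. This is a generic sketch; whether Mixcloud salted its hashes, which would slow such guessing further, isn’t stated.

```python
# One-way hashing sketch: the stored digest reveals nothing about the
# password, so "unscrambling" means brute-forcing guesses until one
# produces the same digest.

import hashlib

def sha256_hex(password: str) -> str:
    """Hex SHA-256 digest of a password (SHA-256 is a member of the
    SHA-2 family the report mentions)."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

stored = sha256_hex("hunter2")  # what a breached database would hold

# An attacker can only try candidates and compare digests.
for guess in ["letmein", "password", "hunter2"]:
    if sha256_hex(guess) == stored:
        print(f"cracked: {guess}")  # prints "cracked: hunter2"
```

A strong, unguessable password never appears on any candidate list, which is why hashed credentials are described as near impossible to unscramble.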

We verified a portion of the data by validating emails against the site’s sign-up feature, though Mixcloud does not require users to verify their email addresses.

The exact number of records stolen isn’t known. The seller said there were 20 million records, but listed 21 million records on the dark web. The data we sampled suggested there may have been as many as 22 million records, based on unique values in the data set we were given.

The data was listed for sale for $4,000, or about 0.5 bitcoin. We’re not linking to the dark web listing.

Mixcloud last year secured an $11.5 million cash injection from media investment firm WndrCo, led by Hollywood media proprietor Jeffrey Katzenberg.

It’s the latest in a string of high profile data breaches in recent months. The breached data came from the same dark web seller who also alerted TechCrunch to the StockX breach earlier this year. The apparel trading company initially claimed its customer-wide password reset was for “system updates,” but later came clean, admitting it was hacked, exposing more than four million records, after TechCrunch obtained a portion of the breached data.

When reached, Mixcloud spokesperson Lisa Roolant did not comment beyond a boilerplate corporate statement, nor did the spokesperson answer any of our questions — including if the company planned to inform regulators under U.S. state and EU data breach notification laws.

Co-founder Nico Perez also declined to comment further.

As a London-based company, Mixcloud falls under U.K. and European data protection rules. Companies can be fined up to 4% of their annual turnover for violations of European GDPR rules.

Corrected the fourth paragraph to clarify that emails were validated against the site’s sign-up feature, and not the password reset feature. Updated to include comment from the company.

Read more:

The Future (And History) of Phishing And Email Security

Not that long ago, the only way to communicate with someone across the office was to get up and walk over. Then came phone calls, with individual extensions widely used. Eventually, those phone lines were used to link computers together, and someone got the idea that you could send messages to a specific person on a network. That was when email was born.

Because computer networks were so small and used by few people, email was not built with security in mind. The thought that one day there would be more than 4.3 billion email addresses worldwide never occurred to anyone. This oversight first led to spam and then email phishing.

How can understanding the evolution of email help us fight back against phishing and scams?

The History of Email


MIT developed the Compatible Time-Sharing System in the mid-1960s. It allowed users to log in to a terminal and remotely access files from a shared server. ARPANET later joined together a series of networks to create an early internetwork, the predecessor to the internet.

The @ symbol was introduced to send messages to a specific user, the predecessor to modern-day email. In 1976, Queen Elizabeth became the first Head of State to send an email.

It wasn’t until 1977 that the standard email format we know today – with fields for ‘To’ and ‘From’, as well as the ability to forward emails, was developed.

The Birth of Spam

Just a year after email was developed, Gary Thuerk got the idea to send a mass message to everyone in the ARPANET network – all 397 of them. The mass email was about a presentation at a hotel.

The move was so wildly unpopular that no one would try to send such an email again for over a decade.

Mass emails only became a method of attack in 1988, when online gamers sent massive amounts of email to rival players in order to crash their systems and render them unable to play.

It was in 1993 that unwanted emails were called ‘spam’. It’s a name that was chosen as an homage to the Monty Python skit about a character’s dislike of the canned meat of the same name.

The second attempt at mass marketing spam emails took place in 1994.

From Spam To Scam

By the 1990s, scammers had found a way to capitalize on all those unwanted emails landing in inboxes. Sending their own mass emails that contained malicious links or phishing attempts blended right in.

They would pose as system administrators and pretend that there was a problem with a person’s account. They would try to gain access to their login credentials and then send more dangerous emails to the people in that account’s contact list.

In 1996, the term ‘phishing’ was coined. It was after a series of attacks on an AOL message board involving someone asking if anyone knew ways to gain access to the internet for free.

Email attacks became more frequent and more damaging. The ILOVEYOU virus infected 45 million PCs after unsuspecting users opened emails and unknowingly downloaded and forwarded computer worms. Later, the Sircam virus infected one in 20 PCs, causing them to lose critical operating system files.

By 2002, both the U.S. and the E.U. had passed laws prohibiting people from sending marketing emails unless the recipient had previously expressed consent to receive them. Unfortunately, these laws have proven to be largely ineffective.

Modern Email Challenges Need Modern Solutions


Because the way we use email has changed so much, securing communications now requires newer and more advanced tactics. There’s yet another form of phishing to contend with these days: smishing, in which phishing messages are sent through spam text messages.

With each new attack comes new security challenges.

Now that email is mostly in the cloud, that means security needs to be there as well. Today, more than a quarter of those online have been affected by data stolen from the cloud and there are more than 4.7 billion phishing emails sent every day. Cloud-based activities require a new level of security.

See Also: The Cost of Email Phishing

Learn more about the history and future of phishing and spam below.


The History and Future of Phishing [infographic]
Courtesy of Avanan

The post The Future (And History) of Phishing And Email Security appeared first on Dumb Little Man.

More than 1 million T-Mobile customers exposed by breach

T-Mobile has confirmed a data breach affecting more than a million of its customers, whose personal data (but no financial or password data) was exposed to a malicious actor. The company alerted the affected customers but did not provide many details in its official account of the hack.

The company said in its disclosure to affected users that its security team had shut down “malicious, unauthorized access” to prepaid data customers. The data exposed appears to have been:

  • Name
  • Billing address
  • Phone number
  • Account number
  • Rate, plan and calling features (such as paying for international calls)

The latter data is considered “customer proprietary network information,” which telecom regulations require carriers to disclose to customers if it leaks. The implication seems to be that T-Mobile might not have notified customers otherwise. Of course, some hacks, even hacks of historic magnitude, go undisclosed for years.

In this case, however, it seems that T-Mobile has disclosed the hack in a fairly prompt manner, though it provided very few details. When I asked, a T-Mobile representative indicated that “less than 1.5 percent” of customers were affected, which of the company’s approximately 75 million users adds up to somewhat over a million.

The company reports that “we take the security of your information very seriously,” a canard we’ve asked companies to stop saying in these situations.

The T-Mobile representative stated that the attack was discovered in early November and shut down “immediately.” They did not answer other questions I asked, such as whether it was on a public-facing or internal website or database, how long the data was exposed and what specifically the company had done to rectify the problem.

The data listed above is not necessarily highly damaging on its own, but it’s the kind of data with which someone might attempt to steal your identity or take over your account. Account hijacking is a fairly common tactic among cyber-ne’er-do-wells these days and it helps to have details like the target’s plan, home address and so on at one’s fingertips.

If you’re a T-Mobile customer, it may be a good idea to change your password there and check up on your account details.

What you missed in cybersecurity this week

There’s not a week that goes by where cybersecurity doesn’t dominate the headlines. This week was no different. Struggling to keep up? We’ve collected some of the biggest cybersecurity stories of the week to keep you in the know and up to speed.

Malicious websites were used to secretly hack into iPhones for years, says Google

TechCrunch: This was the biggest iPhone security story of the year. Google researchers found a number of websites that were stealthily hacking into thousands of iPhones every week. The operation was carried out by China to target Uyghur Muslims, according to sources, and also targeted Android and Windows users. Google said it was an “indiscriminate” attack through the use of previously undisclosed so-called “zero-day” vulnerabilities.

Hackers could steal a Tesla Model S by cloning its key fob — again

Wired: For the second time in two years, researchers found a serious flaw in the key fobs used to unlock Tesla’s Model S cars, successfully cracking the fob’s encryption again. Tesla had doubled the size of the encryption key after the first crack, but using twice the resources, the researchers broke it once more. The good news is that a software update can fix the issue.
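Some back-of-the-envelope keyspace math shows why “twice the resources” is the surprising part of that story. Doubling a key’s length should square the brute-force search space, not merely double it; that doubling the work sufficed suggests the scheme behaved like two independent 40-bit searches. That reading is an assumption for illustration, not a detail from the report.

```python
# Brute-force cost grows exponentially with key length, so doubling
# a key from 40 to 80 bits should multiply the search space by 2**40.
# The bit lengths below are illustrative, not Tesla's confirmed specs.

def keyspace(bits: int) -> int:
    """Number of candidate keys a brute-force search must cover."""
    return 2 ** bits

# A true 80-bit key squares the 40-bit search space...
assert keyspace(80) == keyspace(40) ** 2

# ...whereas two separate 40-bit searches cost only twice one search,
# i.e. no more than a single 41-bit key.
assert 2 * keyspace(40) == keyspace(41)
```

In other words, a key that can be recovered with “twice the resources” of a 40-bit key offers roughly 41 bits of security, nowhere near the 80 bits its length implies.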

Microsoft’s lead EU data watchdog is looking into fresh Windows 10 privacy concerns

TechCrunch: Microsoft could be back in hot water with the Europeans after the Dutch data protection authority asked its Irish counterpart, which oversees the software giant, to investigate Windows 10 for allegedly breaking EU data protection rules. A chief complaint is that Windows 10 collects too much telemetry from its users. Microsoft made some changes after the issue was first raised in 2017, but the Irish regulator is looking at whether those changes go far enough, and whether users are adequately informed. Microsoft could be fined up to 4% of its global annual revenue if found to have flouted the law. Based on 2018’s figures, Microsoft could face fines as high as $4.4 billion.

U.S. cyberattack hurt Iran’s ability to target oil tankers, officials say

The New York Times: A secret cyberattack against Iran in June, reported only this week, significantly degraded Tehran’s ability to track and target oil tankers in the region. It’s one of several recent offensive operations against foreign targets by the U.S. government. Iran’s military seized a British tanker in July in retaliation for a U.S. operation that downed an Iranian drone. According to a senior official, the strike “diminished Iran’s ability to conduct covert attacks” against tankers, but sparked concern that Iran may be able to quickly get back on its feet by fixing the vulnerability the Americans used to shut down Iran’s operation in the first place.

Apple is turning Siri audio clip review off by default and bringing it in house

TechCrunch: After Apple was caught paying contractors to review Siri queries without user permission, the technology giant said this week it will turn off human review of Siri audio by default and bring any opt-in review in-house. That means users will actively have to allow Apple staff to “grade” audio snippets made through Siri. Apple began audio grading to improve the Siri voice assistant. Amazon, Facebook, Google, and Microsoft have all been caught out using contractors to review user-generated audio.

Hackers are actively trying to steal passwords from two widely used VPNs

Ars Technica: Hackers are targeting and exploiting vulnerabilities in two popular corporate virtual private network (VPN) services. Fortigate and Pulse Secure let remote employees tunnel into their corporate networks from outside the firewall. But these VPN services contain flaws which, if exploited, could let a skilled attacker tunnel into a corporate network without needing an employee’s username or password. That means they can get access to all of the internal resources on that network — potentially leading to a major data breach. News of the attacks came a month after the vulnerabilities in widely used corporate VPNs were first revealed. Thousands of vulnerable endpoints exist — months after the bugs were fixed.

Grand jury indicts alleged Capital One hacker over cryptojacking claims

TechCrunch: And finally, just when you thought the Capital One breach couldn’t get any worse, it does. A federal grand jury said the accused hacker, Paige Thompson, should be indicted on new charges. The alleged hacker is said to have created a tool to detect cloud instances hosted by Amazon Web Services with misconfigured web firewalls. Using that tool, she is accused of breaking into those cloud instances and installing cryptocurrency mining software. This is known as “cryptojacking,” and relies on using computer resources to mine cryptocurrency.

Malicious websites were used to secretly hack into iPhones for years, says Google

Security researchers at Google say they’ve found a number of malicious websites which, when visited, could quietly hack into a victim’s iPhone by exploiting a set of previously undisclosed software flaws.

Google’s Project Zero said in a deep-dive blog post published late on Thursday that the websites were visited thousands of times per week by unsuspecting victims, in what they described as an “indiscriminate” attack.

“Simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant,” said Ian Beer, a security researcher at Project Zero.

He said the websites had been hacking iPhones over a “period of at least two years.”

The researchers found five distinct exploit chains involving 12 separate security flaws, including seven involving Safari, the in-built web browser on iPhones. The five separate attack chains allowed an attacker to gain “root” access to the device — the highest level of access and privilege on an iPhone. In doing so, an attacker could gain access to the device’s full range of features normally off-limits to the user. That means an attacker could quietly install malicious apps to spy on an iPhone owner without their knowledge or consent.

Google said that, based on its analysis, the vulnerabilities were used to steal a user’s photos and messages, as well as track their location in near real time. The “implant” could also access the user’s on-device bank of saved passwords.

The vulnerabilities affect iOS 10 through to the current iOS 12 software version.

Google privately disclosed the vulnerabilities in February, giving Apple only a week to fix the flaws and roll out updates to its users. That’s a fraction of the 90 days typically given to software developers, giving an indication of the severity of the vulnerabilities.

Apple issued a fix six days later with iOS 12.1.4 for iPhone 5s and iPad Air and later.

Beer said it’s possible other hacking campaigns are currently in action.

The iPhone and iPad maker generally has a good reputation on security and privacy matters. Recently the company increased its maximum bug bounty payout to $1 million for security researchers who find flaws that can silently target an iPhone and gain root-level privileges without any user interaction. Under Apple’s new bounty rules — set to go into effect later this year — Google would’ve been eligible for several million dollars in bounties.

When reached, a spokesperson for Apple declined to comment.

Web host Hostinger says data breach may affect 14 million customers

Hostinger said it has reset user passwords as a “precautionary measure” after it detected unauthorized access to a database containing information on millions of its customers.

The breach is said to have happened on Thursday. The company said in a blog post it received an alert that one of its servers was improperly accessed. Using an access token found on the server, which can give access to systems without needing a username or a password, the hacker gained further access to the company’s systems, including an API database. That database contained customer usernames, email addresses, and passwords scrambled with the SHA-1 algorithm, which has been deprecated in favor of stronger algorithms after researchers found SHA-1 was vulnerable to spoofing. The company has since upgraded its password hashing to the stronger SHA-2 algorithm.
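
The difference between these approaches is easy to demonstrate. The sketch below (standard-library Python; the password is a placeholder) shows why even an unsalted SHA-2 digest is a weak way to store passwords, and what a salted, deliberately slow key-derivation function looks like by comparison:

```python
import hashlib
import os

password = b"password"

# Fast, unsalted hashes: every user with this password gets the same
# digest, so a leaked table can be cracked with precomputed lookups.
print(hashlib.sha1(password).hexdigest())    # SHA-1 (deprecated)
print(hashlib.sha256(password).hexdigest())  # SHA-256 (SHA-2 family)

# Stronger practice: a per-user random salt plus a deliberately slow
# key-derivation function, so brute-forcing each account is expensive.
salt = os.urandom(16)
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
print(derived.hex())  # different for every salt
```

Moving from SHA-1 to SHA-2 defeats known collision attacks, but it is the salting and key stretching that actually slow down password cracking.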

Hostinger said the API database stored about 14 million customer records. The company has more than 29 million customers on its books.

The company said it was “in contact with the respective authorities.”

An email from Hostinger explaining the data breach. (Image: supplied)

News of the breach broke overnight. According to the company’s status page, affected customers have already received an email to reset their passwords.

The company said that financial data was not compromised, nor were customer website files or data affected.

But one customer who was affected by the breach accused the company of being potentially “misleading” about the scope of the breach.

A chat log seen by TechCrunch shows a customer support representative telling the customer it was “correct” that customers’ financial data can be retrieved by the API but that the company does “not store any payment data.” Hostinger uses multiple payment processors, the representative told the customer, but did not name them.

Chief executive Balys Kriksciunas told TechCrunch that the remarks made by the customer support representative were “misleading” and denied any customer financial data was compromised. A company investigation into the breach, however, remains under way.

Updated with remarks from Hostinger.

Related stories:

Tesla Model 3 owner implants RFID chip to turn her arm into a key

Forget the keycard or phone app, one software engineer is trying out a new way to unlock and start her Tesla Model 3.

Amie DD, who has a background in game simulation and programming, recently released a video showing how she “biohacked” her body. The software engineer removed the RFID chip from the Tesla Model 3 valet card using acetone, then placed it into a biopolymer, which was injected through a hollow needle into her left arm. A professional who specializes in body modifications performed the injection.

You can watch the process below, although folks who don’t like blood should consider skipping it. Amie DD also has a page on Hackaday.io that explains the project and the process.

The video is missing one crucial detail. It doesn’t show whether the method works. TechCrunch will update the post once a new video delivering the news is released.

Amie DD is not new to biohacking. Her original idea was to use the RFID implant chip already in her hand to start the Model 3. That method, which would have involved taking the Java applet from the valet card and writing it onto her own chip, didn’t work because of Tesla’s security. So Amie DD opted for another implant.

Amie DD explains why and how she did this in another, longer video posted below. She also talks a bit about her original implant in her left hand, which she says is used for “access control.” She uses it to unlock the door of her home, for instance.

How safe are school records? Not very, says student security researcher

If you can’t trust your bank, government or your medical provider to protect your data, what makes you think students are any safer?

Turns out, according to one student security researcher, they’re not.

Eighteen-year-old Bill Demirkapi, a recent high school graduate in Boston, Massachusetts, spent much of his latter school years with an eye on his own student data. Through self-taught pen testing and bug hunting, Demirkapi found several vulnerabilities in his school’s learning management system, Blackboard, and in his school district’s student information system, known as Aspen and built by Follett, which centralizes student data, including performance, grades, and health records.

The former student reported the flaws and revealed his findings at the Def Con security conference on Friday.

“I’ve always been fascinated with the idea of hacking,” Demirkapi told TechCrunch prior to his talk. “I started researching but I learned by doing,” he said.

Among one of the more damaging issues Demirkapi found in Follett’s student information system was an improper access control vulnerability, which if exploited could have allowed an attacker to read and write to the central Aspen database and obtain any student’s data.

Blackboard’s Community Engagement platform had several vulnerabilities, including an information disclosure bug. A debugging misconfiguration allowed him to discover two subdomains, which spat back the credentials for Apple app provisioning accounts for dozens of school districts, as well as the database credentials for most, if not all, instances of Blackboard’s Community Engagement platform, said Demirkapi.

“School data or student data should be taken as seriously as health data. The next generation should be one of our number one priorities, who looks out for those who can’t defend themselves.”
Bill Demirkapi, security researcher

Another set of vulnerabilities could have allowed an authorized user — like a student — to carry out SQL injection attacks. Demirkapi said six databases could be tricked into disclosing sensitive and private data, including grades, school attendance records, punishment history, and library balances, by injecting SQL commands.

Some of the SQL injection flaws were blind attacks, meaning dumping the entire database would have been more difficult but not impossible.
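
The class of bug described here is easy to illustrate. In this minimal Python/SQLite sketch (the table and values are invented), string concatenation lets a crafted input rewrite the query, while a parameterized query, the standard fix, treats the same input as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (student TEXT, grade TEXT)")
conn.executemany("INSERT INTO grades VALUES (?, ?)",
                 [("alice", "A"), ("bob", "B")])

# Vulnerable: attacker-controlled input is spliced into the SQL string,
# so this crafted value rewrites the WHERE clause to match every row.
evil = "nobody' OR '1'='1"
leaked = conn.execute(
    f"SELECT student, grade FROM grades WHERE student = '{evil}'"
).fetchall()
print(leaked)  # all rows leak, not just the attacker's own

# Safe: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT student, grade FROM grades WHERE student = ?", (evil,)
).fetchall()
print(safe)  # []
```

In a blind variant, the attacker never sees rows directly and instead infers data bit by bit from whether the query succeeds, which is slower but, as noted above, far from impossible.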

In all, over 5,000 schools and over five million students and teachers were impacted by the SQL injection vulnerabilities alone, he said.

Demirkapi said he was mindful to not access any student records other than his own. But he warned that any low-skilled attacker could have done considerable damage by accessing and obtaining student records, not least thanks to the simplicity of the database’s password. He wouldn’t say what it was, only that it was “worse than ‘1234’.”

But finding the vulnerabilities was only one part of the challenge. Disclosing them to the companies turned out to be just as tricky.

Demirkapi admitted that his disclosure with Follett could have been better. He found that one of the bugs gave him improper access to create his own “group resource,” such as a snippet of text, which was viewable to every user on the system.

“What does an immature 11th grader do when you hand him a very, very, loud megaphone?” he said. “Yell into it.”

And that’s exactly what he did. He sent out a message to every user, displaying each user’s login cookies on their screen. “No worries, I didn’t steal them,” the alert read.

“The school wasn’t thrilled with it,” he said. “Fortunately, I got off with a two-day suspension.”

He conceded it wasn’t one of his smartest ideas. He wanted to show his proof-of-concept but was unable to contact Follett with details of the vulnerability. He later went through his school, which set up a meeting, and disclosed the bugs to the company.

Blackboard, however, ignored Demirkapi’s responses for several months, he said. He knows because after the first month of being ignored, he included an email tracker, allowing him to see how often the email was opened — which turned out to be several times in the first few hours after sending. And yet the company still did not respond to the researcher’s bug report.

Blackboard eventually fixed the vulnerabilities, but Demirkapi said he found that the companies “weren’t really prepared to handle vulnerability reports,” despite Blackboard ostensibly having a published vulnerability disclosure process.

“It surprised me how insecure student data is,” he said. “School data or student data should be taken as seriously as health data,” he said. “The next generation should be one of our number one priorities, who looks out for those who can’t defend themselves.”

He said if a teenager had discovered serious security flaws, it was likely that more advanced attackers could do far more damage.

Heather Phillips, a spokesperson for Blackboard, said the company appreciated Demirkapi’s disclosure.

“We have addressed several issues that were brought to our attention by Mr. Demirkapi and have no indication that these vulnerabilities were exploited or that any clients’ personal information was accessed by Mr. Demirkapi or any other unauthorized party,” the statement said. “One of the lessons learned from this particular exchange is that we could improve how we communicate with security researchers who bring these issues to our attention.”

Follett spokesperson Tom Kline said the company “developed and deployed a patch to address the web vulnerability” in July 2018.

The student researcher said he was not deterred by the issues he faced with disclosure.

“I’m 100% set already on doing computer security as a career,” he said. “Just because some vendors aren’t the best examples of good responsible disclosure or have a good security program doesn’t mean they’re representative of the entire security field.”

The Cost of Email Phishing

When did email become the weakest link? How can you protect your organization from email phishing attacks?

People have always clicked on malicious links, and having a message sent directly to your inbox seems to make it even more likely that you will click.

One out of every 99 emails is a phishing scam, which means every employee in your organization receives almost five phishing emails every workweek. Unfortunately, most people rely on their email program to filter out such messages.
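
That “almost five” figure is simple arithmetic, assuming an employee receives on the order of 100 emails per business day (an illustrative assumption, not a figure from the report):

```python
phishing_rate = 1 / 99        # one in every 99 emails is phishing
emails_per_day = 100          # assumed inbound volume per employee
workdays_per_week = 5

per_week = phishing_rate * emails_per_day * workdays_per_week
print(round(per_week, 1))     # just over 5 phishing emails per workweek
```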

Phishing Attacks Are Very Common — And Very Costly

Almost a third of phishing emails make it past default email security and 5% of those have been whitelisted by a system admin. There are several very common forms of phishing attacks:

  • 41% are credential-harvesting attacks, in which hackers try to obtain the target’s usernames and passwords; these cost about $400 per account to clean up.
  • 51% are links that prompt a malware download, causing an average of $2.4 million in damage when successful.
  • 0.4% are spearphishing attacks, in which high-level people in an organization are targeted. While these are the least common attacks, they can be the most expensive, averaging $7.2 million per incident.
  • 8% are extortion attempts, which cost an average of $5,000 per user when successful.

Last year, 64% of information security professionals were targeted by spearphishing attacks, while 35% of working professionals don’t even know what the term “phishing” means. The cost of phishing goes beyond cleanup: it can also do serious reputational damage.

The average cost of a phishing attack on a midsized business is $1.6 million. There’s lost productivity while everyone tries to halt and undo the damage. There’s also a loss of proprietary data and perhaps the worst of all is the damage to a company’s reputation after a breach. A third of consumers will stop using a business once a breach has occurred and it could take years to recover from such an incident.

It’s Entirely Too Easy To Fall For The Bait

Even if you are in the 65% of working professionals who know what a phishing attack is, it’s still very easy to fall victim. Successful phishing campaigns play to our emotions and sense of urgency. They often feature subject lines designed to scare or cajole us into action.

Subject lines such as “complaint filed” or “open enrollment” make us believe there’s an action that needs to be taken immediately or something bad might happen. It may include losing our family’s health insurance or getting fired from our jobs.

It also doesn’t help that a quarter of phishing emails spoof trusted brands. When you are expecting a package from Amazon and happen to get an email from Amazon in your inbox, it might seem believable enough that you open it to see what’s going on.

The most common signs of phishing include:

  • Address of a crypto wallet
  • Link to a WordPress site
  • BCC to many others
  • Shortened URLs
  • From a trusted brand
  • Link to a file on Google Drive

Because these are all things that have legitimate uses, hackers can exploit them to make us think they are completely safe. Knowing the threat is the best way to avoid falling victim, but that may not be enough. If hackers weren’t so good at what they do, which is understanding human psychology, we would have no need for email scanning software.
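
To make the point concrete, here is a naive scorer built only on the indicators listed above. Every pattern and threshold is invented for illustration; real email scanners combine far more signals with sender reputation and machine learning:

```python
import re

SHORTENERS = ("bit.ly", "tinyurl.com", "goo.gl")
BAIT_SUBJECTS = {"complaint filed", "open enrollment"}

def phishing_indicators(subject: str, body: str, bcc_count: int) -> int:
    """Count how many of the common phishing signs a message shows."""
    score = 0
    if any(s in body for s in SHORTENERS):
        score += 1   # shortened URLs hide the real destination
    if re.search(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b", body):
        score += 1   # looks like a Bitcoin wallet address
    if "/wp-content/" in body or "/wp-admin/" in body:
        score += 1   # link into a (possibly hijacked) WordPress site
    if bcc_count > 20:
        score += 1   # mass-BCC distribution
    if subject.lower() in BAIT_SUBJECTS:
        score += 1   # urgency-baiting subject line
    return score

sample = ("Pay 0.5 BTC to 1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2 "
          "via bit.ly/deadbeef")
score = phishing_indicators("Complaint filed", sample, bcc_count=45)
print(score)  # 4 of the listed signs trigger
```

A message hitting several of these signs at once is far more suspicious than any single sign alone, which is exactly why each one is harmless in isolation.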

It Helps To Have Backup

The existing spam filters in your email program catch a lot of the problems but not all of them. This lulls us into a false sense of security and leaves us believing that if something lands in our inboxes, it’s probably safe.

Unfortunately, this is just not the case. Learning how to avoid phishing attacks and schemes is crucial and it means reminding employees of these tactics on a regular basis. It can also help to get additional email scanning software to catch anything that looks real enough to be a threat.

Learn more about how email became the weakest link and how you can fight back from the infographic below.

How Email Became the Weakest Link [infographic]
Courtesy of Avanan

 

 

The post The Cost of Email Phishing appeared first on Dumb Little Man.

Capital One’s breach was inevitable, because we did nothing after Equifax

Another day, another massive data breach.

This time it’s the financial giant and credit card issuer Capital One, which revealed on Monday a credit file breach affecting 100 million Americans and 6 million Canadians. Consumers and small businesses affected are those who obtained one of the company’s credit cards dating back to 2005.

That includes names, addresses, phone numbers, dates of birth, self-reported income and more credit card application data — including over 140,000 Social Security numbers in the U.S., and more than a million in Canada.

The FBI already has a suspect in custody. Seattle resident and software developer Paige A. Thompson, 33, was arrested and detained pending trial. She’s been accused of stealing data by breaching a web application firewall, which was supposed to protect it.

Sound familiar? It should. Just last week, credit rating giant Equifax settled for more than $575 million over a data breach it had — and hid from the public for several months — two years prior.

Why should we be surprised? Equifax faced zero fallout until its eventual fine. All talk, much bluster, but otherwise little action.

Equifax’s chief executive Richard Smith “retired” before he was fired, allowing him to keep his substantial pension packet. Lawmakers grilled the company but nothing happened. An investigation launched by the former head of the Consumer Financial Protection Bureau, the governmental body responsible for protecting consumers from fraud, declined to pursue the company. The FTC took its sweet time to issue its fine — which amounted to about 20% of the company’s annual revenue for 2018. For one of the most damaging breaches to the U.S. population since the breach of classified vetting files at the Office of Personnel Management in 2015, Equifax got off lightly.

Legislatively, nothing has changed. Equifax remains as much of a “victim” in the eyes of the law as it was before — technically, but much to the ire of the millions affected who were forced to freeze their credit as a result.

Mark Warner, a Democratic senator from Virginia, along with his colleague and since-turned presidential candidate Elizabeth Warren, was tough on the company, calling for it to do more to protect consumer data. With his colleagues, he called for penalties on the credit agencies’ top brass and steep fines to hold the companies accountable — and to send a message to others that they can’t play fast and loose with our data again.

But Congress didn’t bite. Warner told TechCrunch at the time that there was “a failure of the company, but also of lawmakers” for not taking action.

Lo and behold, it happened again. Without a congressional intervention, Capital One is likely to face largely the same rigmarole as Equifax did.

Blame the lawmakers all you want. They had their part to play in this. But fool us twice, shame on the credit companies for not properly taking action in the first place.

The Equifax incident should have sparked a fire under the credit giants. The breach was the canary in the coal mine. We watched and waited to see what would happen as the canary’s lifeless body emerged — but, much to the American public’s chagrin, no action came of it. The companies continued on with the mentality that “it could happen to us, but probably won’t.” It was always going to happen again unless there was something to force the companies to act.

Companies continue to vacuum up our data — knowingly and otherwise — and don’t do enough to protect it. As much as we can have laws to protect consumers from this happening again, these breaches will continue so long as the companies continue to collect our data and not take their data security responsibilities seriously.

We had an opportunity to stop these kinds of breaches from happening again, yet in the two years since we’ve barely grappled with the basic concepts of internet security. All we have to show for it is a meager fine.

Thompson faces five years in prison and a fine of up to $250,000.

Everyone else faces just another major intrusion into their personal lives. Not at the hands of the hacker per se, but the companies that collect our data — with our consent and often without — and take far too many liberties with it.

The Most Devastating Cyber Attack: How to Prevent Ransomware

Large corporations, governments, and even small businesses are always at high risk of being attacked. Hackers and cybercriminals aim to steal your money no matter who or what you are. Cyber attacks increased by 235%, according to a 2019 report from Malwarebytes Labs.

Malware, man-in-the-middle, and phishing are some of the most common types of cyber attacks, but which is the most destructive? The answer is ransomware.

Ransomware is on the rise in the cyber world. It is a type of malware that locks up your data; criminals then demand a ransom to give it back.

In the U.S., Cleveland Hopkins International Airport and the city of Baltimore have both faced ransomware attacks. These high-profile incidents left smaller businesses thinking that large corporations and governments are the only targets of ransomware.

The truth, however, is that every business should prepare itself for this kind of attack. Every business needs to know how to prevent ransomware.

Difference between ransomware and other cyber attacks

Ransomware doesn’t need confidential information or personal data to be effective. This distinct behavior sets it apart from other cyber attacks: it hunts for data in an organization that is valuable enough for the victim to pay the ransom just to get it back.

Ransomware is a very effective way for hackers to paralyze an organization. It restricts the organization’s ability to access its information, deliver services, and accept payments. All of these obstacles will turn customers away.

How to Prevent Ransomware

Ransomware can cost a business dearly, so every business must prepare itself to combat not only ransomware but all kinds of cyber attacks.

Several best practices businesses should implement to protect themselves from cyber attacks include:

Security Software

Security software is crucial for an organization to detect and deter fraudulent activity. This software should be strong enough to verify activities and detect potential harm to the organization.

If your business operates online, your organization should use an identity verification service. This anti-fraud technology for online businesses can verify your customers and even employees online, and it is capable of detecting when someone is pretending to be someone else.

Firewall & Intrusion Detection/Protection

Installing a firewall is a crucial security measure that no business should neglect. A firewall can allow or deny access to a company network, or to a part of the network. By restricting access with a firewall, an organization can keep attackers out.
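
The allow/deny idea can be sketched in a few lines with Python’s standard ipaddress module; the network ranges here are invented for illustration and stand in for real firewall rules:

```python
import ipaddress

# Deny by default: only sources inside the allowed ranges get through.
ALLOWED_NETS = [
    ipaddress.ip_network("10.8.0.0/16"),     # e.g. the corporate VPN range
    ipaddress.ip_network("192.168.1.0/24"),  # e.g. the office LAN
]

def allow(src_ip: str) -> bool:
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED_NETS)

print(allow("10.8.4.2"))     # True: inside the VPN range
print(allow("203.0.113.9"))  # False: unknown external address
```

Real firewalls also filter on ports, protocols, and connection state, but the deny-by-default allow-list is the core of the control.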

Web & Email Filtering

All efforts to keep ransomware out of the network become useless when an end user inadvertently opens a malicious email and clicks a malicious link. And as phishing attempts become more sophisticated, it is increasingly difficult for end users to spot every malicious email.

Through email safety awareness training and email filtering for employees, an organization can mitigate this risk. It’s one of the best ways to prevent ransomware.

User Education

Users and clients are the most valuable asset of a company, and their protection should be a priority for any organization. Organizations should run regular awareness programs to educate users about malicious links and malicious actors.

See Also: 5 Top Cyber Security Training Tips For Employees

Backup offers the best protection

Taking backups on a regular basis can protect you from ransomware: if you become a victim, you can restore from a previous backup and keep going. Making backups usually costs little to nothing, but restoring them can take days or weeks. The labor and downtime needed to restore and recover your systems to full function is where the real cost lies.
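
A regular backup can be as simple as a scheduled script. This minimal sketch (the paths in the usage comment are placeholders) archives a directory into a timestamped tarball using only the Python standard library; real strategies add off-site copies, encryption, retention policies, and, crucially, restore drills:

```python
import pathlib
import tarfile
import time

def backup(src: str, dest_dir: str) -> pathlib.Path:
    """Archive the src directory into a timestamped .tar.gz in dest_dir."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    out = pathlib.Path(dest_dir) / f"backup-{stamp}.tar.gz"
    with tarfile.open(out, "w:gz") as tar:
        tar.add(src, arcname=pathlib.Path(src).name)
    return out

# e.g. backup("/var/www/data", "/mnt/backups") from a daily cron job
```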

Various backup, disaster recovery, and threat prevention strategies are on the market, and each solution has value according to its use. You can protect your business by implementing a solution that mitigates the risks specific to your business.

Your IT team or technical department should analyze the business deeply in order to implement the best solution for its unique problems. The team must identify which types of backup benefit the organization and how to restore them quickly and efficiently in a critical situation.

The post The Most Devastating Cyber Attack: How to Prevent Ransomware appeared first on Dumb Little Man.

Apple disables Walkie Talkie app due to vulnerability that could allow iPhone eavesdropping

Apple has disabled the Apple Watch Walkie Talkie app due to an unspecified vulnerability that could allow a person to listen to another customer’s iPhone without consent, the company told TechCrunch this evening.

Apple has apologized for the bug and for the inconvenience of being unable to use the feature while a fix is made.

The Walkie Talkie app on Apple Watch allows two users who have accepted an invite from each other to receive audio chats via a ‘push to talk’ interface reminiscent of the PTT buttons on older cell phones.

A statement from Apple reads:

We were just made aware of a vulnerability related to the Walkie-Talkie app on the Apple Watch and have disabled the function as we quickly fix the issue. We apologize to our customers for the inconvenience and will restore the functionality as soon as possible. Although we are not aware of any use of the vulnerability against a customer and specific conditions and sequences of events are required to exploit it, we take the security and privacy of our customers extremely seriously. We concluded that disabling the app was the right course of action as this bug could allow someone to listen through another customer’s iPhone without consent.  We apologize again for this issue and the inconvenience.

Apple was alerted to the bug directly via its “report a vulnerability” portal and says that there is no current evidence that it was exploited in the wild.

The company is temporarily disabling the feature entirely until a fix can be made and rolled out to devices. The Walkie Talkie App will remain installed on devices, but will not function until it has been updated with the fix.

Earlier this year a bug was discovered in the group calling feature of FaceTime that allowed people to listen in before a call was accepted. It turned out that the teen who discovered the bug, Grant Thompson, had attempted to contact Apple about the issue but was unable to get a response. Apple fixed the bug and eventually awarded Thompson a bug bounty. This time around, Apple appears to be listening more closely to the reports that come in via its vulnerability tips line, and it has disabled the feature.

Earlier today, Apple quietly pushed a Mac update to remove a feature of the Zoom conference app that allowed it to work around Mac restrictions to provide a smoother call initiation experience — but that also allowed emails and websites to add a user to an active video call without their permission.

WeWork acquires Waltz, an app that lets users access different spaces with a single credential

WeWork announced today that it will acquire Waltz, a building access and security management startup, for an undisclosed amount. Waltz’s smartphone app and reader allows users to enter different properties with a single credential and will make it easier for WeWork’s enterprise clients, such as GE Healthcare and Microsoft, to manage their employees’ on-demand memberships to WeWork spaces.

WeWork’s announcement said “with deep expertise in mobile access and system integrations, Waltz has the most advanced and sophisticated products to provide that single credential to our members and to help us better connect them with our spaces.” Waltz was founded in 2015 by CEO Matt Kopel and has offices in New York and Montreal. After the acquisition, Waltz will be integrated into WeWork, but maintain its current customer base.

WeWork has been on an acquisition spree over the past year as it evolves from co-working spaces to a software-as-a-service provider. Companies it has bought include office management platforms Teem (for $100 million) and Managed by Q, as well as Euclid, a “spatial analytics platform” that allows companies to analyze the use of workspaces by their employees and participation at meetings and other events.

Likewise, Waltz isn’t just an alternative to keys or access cards. Its cloud-based management portal gives companies data about who enters and exits their buildings and also allows teams to set “Door Groups,” which restricts the use of some spaces to certain people. According to Waltz’s help site, it can also be used to make revenue through ads displayed in its app.
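
A “Door Groups” restriction is essentially a group-based access-control mapping. This hypothetical Python model (not Waltz’s actual API or data format) shows the idea:

```python
# Each group maps a set of doors to the badge holders allowed through them.
DOOR_GROUPS = {
    "server-room": {"doors": {"B2-04"}, "members": {"alice"}},
    "lobby": {"doors": {"L-01", "L-02"}, "members": {"alice", "bob"}},
}

def may_enter(user: str, door: str) -> bool:
    # Access is granted if any group pairs this user with this door.
    return any(door in g["doors"] and user in g["members"]
               for g in DOOR_GROUPS.values())

print(may_enter("alice", "B2-04"))  # True: alice is in the server-room group
print(may_enter("bob", "B2-04"))    # False: that door is restricted
```

Because the mapping lives in a cloud portal rather than on physical keys, membership changes take effect immediately, which is the operational appeal for enterprise tenants.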

SentinelOne raises $120M for its fully-autonomous, AI-based endpoint security solution

Endpoint security — the branch of cybersecurity that focuses on data coming in from laptops, phones, and other devices connected to a network — is an $8 billion market that, due to the onslaught of network breaches, is growing fast. To underscore that demand, one of the bigger startups in the space is announcing a sizeable funding round.

SentinelOne, which provides real-time endpoint protection on laptops, phones, containers, cloud services and most recently IoT devices on a network through a completely autonomous, AI-based platform, has raised $120 million in a Series D round — money that it will be using to continue expanding its current business as well as forge into new areas such as building more tools to automatically detect and patch software running on those endpoints, to keep them as secure as possible.

The funding was led by Insight Partners, with Samsung Venture Investment Corporation and NextEquity participating, alongside all of the company’s existing investors, which include the likes of Third Point Ventures, Redpoint Ventures, Data Collective, Sound Ventures and Ashton Kutcher, Tiger Global, Granite Hill and more.

SentinelOne is not disclosing its valuation with this round, but CEO and co-founder Tomer Weingarten confirmed it was up compared to its previous funding events. SentinelOne has now raised just shy of $130 million, and PitchBook notes that in its last round, it was valued at $210 million post-money.

That would imply that this round values SentinelOne at more than $330 million, likely significantly more: “We are one of the youngest companies working in endpoint security, but we also have well over 2,000 customers and 300% growth year-on-year,” Weingarten said. And working in the area of software-as-a-service with a fully-automated solution that doesn’t require humans to run any aspect of it, he added, “means we have high margins.”

The rise in cyberattacks resulting from malicious hackers exploiting human errors — such as clicking on phishing links; or bringing in and using devices from outside the network running software that might not have its security patches up to date — has resulted in a stronger focus on endpoint security and the companies that provide it.

Indeed, SentinelOne is not alone. Crowdstrike, another large startup in the same space as SentinelOne, is now looking at a market cap of at least $4 billion when it goes public. Carbon Black, which went public last year, is valued at just above $1 billion. Another competitor, Cylance, was snapped up by BlackBerry for $1.5 billion.

Weingarten — who cofounded the company with Almog Cohen (CTO) and Ehud Shamir (CSO) — says that SentinelOne differs from its competitors in the field because of its focus on being fully autonomous.

“We’re able to digest massive amounts of data and run machine learning to detect any type of anomaly in an automated manner,” he said, describing Crowdstrike as “tech augmented by services.” That’s not to say SentinelOne is completely without human options (options being the key word; they’re not required): it offers its own managed services under the brand name of Vigilance and works with system integrator partners to sell its products to enterprises.

There is another recurring issue with endpoint security solutions: they are known to throw up a lot of false positives, items the system doesn't recognize and subsequently blocks that turn out to be safe. Weingarten admits that this is a by-product of all these systems, including SentinelOne’s.

“It’s a result of opting to use a heuristic rather than deterministic model,” he said, “but there is no other way to deal with anomalies and unknowns without heuristics, but yes with that comes false positives.” He pointed out that the company’s focus on machine learning as the basis of its platform helps it to more comprehensively ferret these out and make deductions on what might not otherwise have proper representation in its models. Working for a pilot period at each client also helps inform the algorithms to become more accurate ahead of a full rollout.

All this has helped bring down SentinelOne’s own false positive rate, which Weingarten said is around 0.04%, putting it among the lower mis-detectors in VirusTotal’s breakdown of false positive rates.

“Endpoint security is at a fascinating point of maturity, highlighting a massive market opportunity for SentinelOne’s technology and team,” said Teddie Wardi, Managing Director, Insight Partners, in a statement. “Attack methods grow more advanced by the day and customers demand innovative, autonomous technology to stay one step ahead. We recognize SentinelOne’s strong leadership team and vision to be unique in the market, as evidenced through the company’s explosive growth and highly differentiated business model from its peer cybersecurity companies.”

By virtue of digesting activity across millions of endpoints and billions of events among its customers, SentinelOne has an interesting vantage point when it comes to seeing the biggest problems of the moment.

Weingarten notes that one big trend is that the biggest attacks are now not always coming from state-sponsored entities.

“Right now we’re seeing how fast advanced techniques are funnelling down from government-sponsored attackers to any cyber criminal. Sophisticated malicious hacking can now come from anywhere,” he said.

When it comes to figuring out what is most commonly creating vulnerabilities at an organization, he said it was the challenge of keeping up to date with security patches. Unsurprisingly, it’s something that SentinelOne plans to tackle with a new product later this year — one reason for the large funding round this time around.

“Seamless patching is absolutely something that we are looking at,” he said. “We already do vulnerability assessments today and so we have the data to tell you what is out of date. The next logical step is to seamlessly track those apps and issue the patches automatically.”

Indeed, it’s this longer-term vision of how the platform will develop, and how it’s responding to today’s threats, that attracted the backers. (Indeed, the IoT element of the “endpoint” focus is a recent addition.)

“SentinelOne’s combination of best-in-class EPP and EDR functionality is a magnet for engagement, but it’s the company’s ability to foresee the future of the endpoint market that attracted us as a technology partner,” a rep from Samsung Venture Investment Corporation said in a statement. “Extending tech stacks beyond EPP and EDR to include IoT is the clear next step, and we look forward to collaborating with SentinelOne on its groundbreaking work in this area.”

8 Ways You’re Actually Inviting Burglars Into Your Home

According to FBI crime statistics, there were an estimated 7,694,086 property crimes nationwide with losses of $15.3 billion in 2017.

Though you certainly don’t want your home to become the next target of potential thieves, sometimes you might be unwittingly inviting burglars into your home and putting your property (your family as well) at risk.

To avoid ending up as low-hanging fruit in the eyes of intruders, make sure you steer clear of these 8 home security mistakes:

Unlocked doors, windows, and other entrances

The shocking fact is that 32% of homeowners leave a window open and 13% leave a door unlocked. This offers a great opportunity for thieves to sneak into your home without alerting your neighbors.

So, take a few seconds before you leave home to double check all your doors, windows, and other entry points. And don’t forget about your storage shed, basement or garage as well!

No lights on at night

A dark home at night can be a clear sign that your house is vacant. Instead of turning all your lights on when you’re away from home (smart burglars will easily see through this trick), it’s better to install timers on interior lamps. That way, you can create an appearance that the house is occupied.

Uncollected mail, newspapers, and packages

If you plan to go away for a vacation or on a business trip, ask a reliable neighbor, friend or family member in advance to pick up your mail, newspapers, and packages. You may request the post office to hold your mail and ask the newsagent to stop delivering your papers until you come back home.

Leaving ladders and tools out

Leaving a ladder, hammer, saw or other tools out in the open is practically inviting trouble. Once these fall into the hands of burglars, the next thing you can expect is a forced entry into your home.

Place your tools in your garage or basement after use. Also, make sure that your basement and garage are well locked.

Untrimmed bushes and landscape

Overgrown bushes not only provide ideal cover for burglars to hide behind when casing your house, but can also signal that you have been away for a long time — an open invitation to break in.

Trim the bushes and mow your lawn regularly to make sure no one can hide in them. If you’re going away for a long period of time, hire someone to attend to the landscape in your absence.

Displaying valuable items in plain view

Are you leaving your garden furniture and lawn decorations in plain sight? Or do you just toss the box from your brand-new TV or computer out on the curb? Watch out!

Thieves select homes to break into by taking note of boxes left curbside as trash, especially during the holiday season. A safer way to dispose of packaging for valuables is to cut the boxes up and toss the pieces in the trash can.

Leaving spare keys under carpet/stones

You might think it is a great idea to hide your spare keys under the carpet or stones, but never underestimate burglars. They’re good at hide-and-seek.

Doormats, flowerpots, mailboxes, and stones are normally the first places smart thieves search. If you’re afraid you might be locked out, give a set of keys to a trustworthy family member or friend.

Showing off on social media

It is understandable that you love to share memorable trip experiences on social media. But take heed: posting your vacation details on Facebook, Twitter, and Instagram is basically announcing to burglars that your home is unoccupied and free to break into.

So instead of posting your real-time vacation moments, wait until you come back home to share the photos online.

See Also: Home Security: Try These 10 Ways to Make Your Home Safer – Without a Gun

The post 8 Ways You’re Actually Inviting Burglars Into Your Home appeared first on Dumb Little Man.

Internet Access While Traveling: Tips for Keeping Your Data Safe

Are you tired of unreliable and painfully slow internet when traveling? Do you worry about your security?

It does feel like an endless battle waiting for apps to respond and pages to load. Imagine spending your precious traveling time looking at a blank screen instead of enjoying the beautiful environment.

To ensure you enjoy your internet access without worrying about your security while traveling, we have put together some really helpful tips.

Ensure you stick to secure sites

First, check whether the website is secure. You can review a site’s security information using a trusted browser like Firefox or Chrome. You’ll know a site’s connection is encrypted when there’s a lock icon to the left of the URL in the address bar.

It is important to avoid entering sensitive information — credit card numbers, passwords, and other personal details — on non-secure sites, especially when using a public network.
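As a quick illustration of that first check (a minimal sketch with a hypothetical helper, not something from the original post), a script can refuse to handle anything sensitive unless the URL uses the encrypted HTTPS scheme:

```python
from urllib.parse import urlparse


def is_https(url: str) -> bool:
    """Return True only if the URL uses the encrypted HTTPS scheme."""
    return urlparse(url).scheme == "https"


# Only ever send credentials to encrypted endpoints.
print(is_https("https://example.com/login"))  # True
print(is_https("http://example.com/login"))   # False
```

Note the scheme check only confirms the connection is encrypted in transit; it says nothing about whether the site itself is trustworthy.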

See Also: 8 Easy Steps To Your Browser Security And Privacy

Avoid using apps

App security is less stringent than browser security. If you are using apps from popular brands like PayPal, you will most likely be okay.

However, it is important to avoid entering sensitive information into apps from less established companies. This is crucial, especially if you reuse the same password across various websites.

Switch off file sharing

Ensure that your files are secure.

In most cases, when you are using your laptop on the home network, you normally share folders with your parents, siblings or friends. This is okay as long as you remember to turn it off when connecting to a public Wi-Fi.

If you forget to turn off file sharing, every person who connects to the same Wi-Fi can view your files.

Most recent computers are smart enough to turn off file sharing automatically when you connect to public Wi-Fi. However, it is advisable to always double-check.

Update your anti-virus

It is important that you never connect your devices to any free Wi-Fi network without an updated antivirus. Most smartphones and laptops these days come with built-in software like Windows Defender. However, you are still advised to go a step further and install software like Avast, which can give you an extra layer of protection.

Use a VPN

Consider using a VPN, which acts as your private internet bodyguard. A VPN hides your IP address and encrypts your connection to ensure everything you send over the internet stays hidden.

VPNs are cheap and accessible, so there’s no excuse not to use one.

See Also: How to Set Up a VPN

Conclusion

The above measures will not make you bulletproof. However, they will help reduce your chances of being targeted and improve your internet access while traveling.

The post Internet Access While Traveling: Tips for Keeping Your Data Safe appeared first on Dumb Little Man.

Sprint customers say a glitch exposed other people’s account information

Several Sprint customers have said they are seeing other customers’ personal information in their online accounts.

One reader emailed TechCrunch with several screenshots describing the issue, warning that they could see other Sprint customers’ names and phone numbers. The reader said they informed the phone giant of the issue, and a Sprint representative said they had “several calls pertaining to the same issue.”

In all, the reader saw 22 numbers in a two-hour period, they said.

Several other customers complained of the same data exposing bug. It’s unclear how widespread the issue is or for how long the account information leak persisted.

Logged in to pay my @sprint bill, saw what looked like the details of another user. Did this 3 times. I called, rep said they’d been getting other similar calls. Advice on clarifying if this is the privacy breach it looks like? @EFF @publiccitizen @NCLC4consumers @eyywa

— Kylie B-C (@notthatkylie) March 14, 2019

@sprint are you having a known issue with your website?! I’m trying to set permissions on my account and some other damil’s information is on my account!

— Thelma Cheeks (@Tcheeksiamhair) March 19, 2019

If you are a @sprint customer please be aware that there has been a data breach. I have logged on to my account twice and both times have seen other customers’ devices. A phone call with @sprintcare resulted in them hanging up on me.

— Madeline Finch (@themadfinch) March 19, 2019

Another customer told TechCrunch how the Sprint account pages were initially throwing errors. The customer said they scrolled down their account page and saw several numbers that were not theirs. “I was able to click each one individually and see every phone call they made, the text messages they used, and the standard info, including caller ID name they have set,” the customer told TechCrunch.

Of the customers we’ve spoken to, some are pre-paid and others are contract.

We’ve reached out to Sprint for more but did not hear back. We’ll update when more comes in.

Facebook won’t let you opt out of its phone number ‘look up’ setting

Users are complaining that the phone number Facebook hassled them to use to secure their account with two-factor authentication has also been associated with their user profile — which anyone can use to “look up” their profile.

Worse, Facebook doesn’t give you an option to opt out.

Last year, Facebook was forced to admit that after months of pestering its users to switch on two-factor by signing up their phone number, it was also using those phone numbers to target users with ads. But some users are only now finding out that Facebook’s default setting allows everyone — with or without an account — to look up a user profile based on the same phone number previously added to their account.

The recent hubbub began today after a tweet by Jeremy Burge blew up, criticizing Facebook’s collection and use of phone numbers, which he likened to “a unique ID that is used to link your identity across every platform on the internet.”

For years Facebook claimed the adding a phone number for 2FA was only for security. Now it can be searched and there’s no way to disable that. pic.twitter.com/zpYhuwADMS

— Jeremy Burge 🐥🧿 (@jeremyburge) March 1, 2019

Although users can hide their phone number on their profile so nobody can see it, it’s still possible to “look up” user profiles in other ways, such as “when someone uploads your contact info to Facebook from their mobile phone,” according to a Facebook help article. It’s a more restricted way than allowing users to search for user profiles using a person’s phone number, which Facebook restricted last year after admitting “most” users had their information scraped.

Facebook lets users choose who can “look up” their profile using their phone number: “everyone” (the default), “friends of friends,” or just their “friends.”

But there’s no way to hide it completely.

Security expert and academic Zeynep Tufekci said in a tweet: “Using security to further weaken privacy is a lousy move — especially since phone numbers can be hijacked to weaken security,” referring to SIM swapping, where scammers impersonate cell customers to steal phone numbers and break into other accounts.

See thread! Using security to further weaken privacy is a lousy move—especially since phone numbers can be hijacked to weaken security. Putting people at risk. What say you @facebook? https://t.co/9qKtTodkRD

— zeynep tufekci (@zeynep) March 2, 2019

Tufekci argued that users can “no longer keep private the phone number that [they] provided only for security to Facebook.”

Facebook spokesperson Jay Nancarrow told TechCrunch that the settings “are not new,” adding that, “the setting applies to any phone numbers you added to your profile and isn’t specific to any feature.”

Gizmodo reported last year that when a user gives Facebook a phone number for two-factor, it “became targetable by an advertiser within a couple of weeks.”

If a user doesn’t like it, they can set up two-factor without using a phone number — which hasn’t been mandatory for additional login security since May 2018.

But even if users haven’t set up two-factor, there are well documented cases of users having their phone numbers collected by Facebook, whether the user expressly permitted it or not.

In 2017, one reporter for The Telegraph described her alarm at the “look up” feature, given she had “not given Facebook my number, was unaware that it had found it from other sources, and did not know it could be used to look me up.”

WhatsApp, the messaging app also owned by Facebook (alongside Messenger and Instagram), uses your phone number as the primary way to create your account and connect you to its service. Facebook has long had a strategy to further integrate the two services, although it has run into some bumps along the way.

To the specific concerns by users, Facebook said: “We appreciate the feedback we’ve received about these settings and will take it into account.”

Concerned users should switch their “look up” settings to “Friends” to mitigate as much of the privacy risk as possible.

When asked specifically if Facebook will allow users to opt out of the setting, Facebook said it won’t comment on future plans. And, asked why it was set to “everyone” by default, Facebook said the feature makes it easier to find people you know but aren’t yet friends with.

Others criticized Facebook’s move to expose phone numbers to “look ups,” calling it “unconscionable.”

Alex Stamos, former chief security officer and now adjunct professor at Stanford University, also called out the practice in a tweet. “Facebook can’t credibly require two-factor for high-risk accounts without segmenting that from search and ads,” he said.

Since Stamos left Facebook in August, Facebook has not hired a replacement chief security officer.

The case against behavioral advertising is stacking up

No one likes being stalked around the Internet by adverts. It’s the uneasy joke you can’t enjoy laughing at. Yet vast people-profiling ad businesses have made pots of money off of an unregulated Internet by putting surveillance at their core.

But what if creepy ads don’t work as claimed? What if all the filthy lucre that’s currently being sunk into the coffers of ad tech giants — and far less visible but no less privacy-trampling data brokers — is literally being sunk, and could both be more honestly and far better spent?

Case in point: This week Digiday reported that the New York Times managed to grow its ad revenue after it cut off ad exchanges in Europe. The newspaper did this in order to comply with the region’s updated privacy framework, GDPR, which includes a regime of supersized maximum fines.

The newspaper business decided it simply didn’t want to take the risk, so first blocked all open-exchange ad buying on its European pages and then nixed behavioral targeting. The result? A significant uptick in ad revenue, according to Digiday’s report.

“NYT International focused on contextual and geographical targeting for programmatic guaranteed and private marketplace deals and has not seen ad revenues drop as a result, according to Jean-Christophe Demarta, SVP for global advertising at New York Times International,” it writes.

“Currently, all the ads running on European pages are direct-sold. Although the publisher doesn’t break out exact revenues for Europe, Demarta said that digital advertising revenue has increased significantly since last May and that has continued into early 2019.”

It also quotes Demarta summing up the learnings: “The desirability of a brand may be stronger than the targeting capabilities. We have not been impacted from a revenue standpoint, and, on the contrary, our digital advertising business continues to grow nicely.”

So while (of course) not every publisher is the NYT, publishers that have or can build brand cachet, and pull in a community of engaged readers, must and should pause for thought — and ask who is the real winner from the notion that digitally served ads must creep on consumers to work?

The NYT’s experience puts fresh taint on long-running efforts by tech giants like Facebook to press publishers to give up more control and ownership of their audiences by serving and even producing content directly for the third party platforms. (Pivot to video anyone?)

Such efforts benefit platforms because they get to make media businesses dance to their tune. But the self-serving nature of pulling publishers away from their own distribution channels (and content convictions) looks to have an even baser string to its bow — as a cynical means of weakening the link between publishers and their audiences, thereby risking making them falsely reliant on adtech intermediaries squatting in the middle of the value chain.

There are other signs behavioural advertising might be a gigantically self-serving con too.

Look at non-tracking search engine DuckDuckGo, for instance, which has been making a profit by serving keyword-based ads and not profiling users since 2014, all the while continuing to grow usage — and doing so in a market that’s dominated by search giant Google.

DDG recently took in $10M in VC funding from a pension fund that believes there’s an inflection point in the online privacy story. These investors are also displaying strong conviction in the soundness of the underlying (non-creepy) ad business, again despite the overbearing presence of Google.

Meanwhile, Internet users continue to express widespread fear and loathing of the ad tech industry’s bandwidth- and data-sucking practices by running into the arms of ad blockers. Figures for usage of ad blocking tools step up each year, with between a quarter and a third of U.S. connected device users estimated to be blocking ads as of 2018 (rates are higher among younger users).

Ad blocking firm Eyeo, maker of the popular AdBlock Plus product, has achieved such a position of leverage that it gets Google et al to pay it to have their ads whitelisted by default — under its self-styled ‘acceptable ads’ program. (Though no one will say how much they’re paying to circumvent default ad blocks.)

So the creepy ad tech industry is not above paying other third parties for continued — and, at this point, doubly grubby (given the ad blocking context) — access to eyeballs. Does that sound even slightly like a functional market?

In recent years expressions of disgust and displeasure have also been coming from the ad spending side too — triggered by brand-denting scandals attached to the hateful stuff algorithms have been serving shiny marketing messages alongside. You don’t even have to be worried about what this stuff might be doing to democracy to be a concerned advertiser.

Fast moving consumer goods giants Unilever and Procter & Gamble are two big spenders which have expressed concerns. The former threatened to pull ad spend if social network giants didn’t clean up their act and prevent their platforms algorithmically accelerating hateful and divisive content.

While the latter has been actively reevaluating its marketing spending — taking a closer look at what digital actually does for it. And last March Adweek reported it had slashed $200M from its digital ad budget yet had seen a boost in its reach of 10 per cent, reinvesting the money into areas with “‘media reach’ including television, audio and ecommerce”.

The company’s CMO, Marc Pritchard, declined to name which companies it had pulled ads from but in a speech at an industry conference he said it had reduced spending “with several big players” by 20 per cent to 50 per cent, and still its ad business grew.

So chalk up another tale of reduced reliance on targeted ads yielding unexpected business uplift.

At the same time, academics are digging into the opaquely shrouded question of who really benefits from behavioral advertising. And perhaps getting closer to an answer.

Last fall, at an FTC hearing on the economics of big data and personal information, Carnegie Mellon University professor of IT and public policy, Alessandro Acquisti, teased a piece of yet to be published research — working with a large U.S. publisher that provided the researchers with millions of transactions to study.

Acquisti said the research showed that behaviourally targeted advertising had increased the publisher’s revenue but only marginally. At the same time they found that marketers were having to pay orders of magnitude more to buy these targeted ads, despite the minuscule additional revenue they generated for the publisher.

“What we found was that, yes, advertising with cookies — so targeted advertising — did increase revenues — but by a tiny amount. Four per cent. In absolute terms the increase in revenues was $0.000008 per advertisement,” Acquisti told the hearing. “Simultaneously we were running a study, as merchants, buying ads with a different degree of targeting. And we found that for the merchants sometimes buying targeted ads over untargeted ads can be 500% times as expensive.”

“How is it possible that for merchants the cost of targeting ads is so much higher whereas for publishers the return on increased revenues for targeted ads is just 4%,” he wondered, posing a question that publishers should really be asking themselves — given, in this example, they’re the ones doing the dirty work of snooping on (and selling out) their readers.
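Plugging in the figures Acquisti cited makes the asymmetry concrete (a back-of-the-envelope sketch using only the numbers quoted above; the per-ad baseline is implied by those numbers, not stated in the research, and “500%” is read here as a 5x multiple):

```python
# Figures quoted by Acquisti at the FTC hearing.
uplift_per_ad = 0.000008   # extra publisher revenue per targeted ad, in dollars
uplift_fraction = 0.04     # targeted ads raised publisher revenue by 4%

# Implied baseline revenue per untargeted ad: the 4% uplift was worth
# $0.000008, so the base must have been 0.000008 / 0.04.
baseline_per_ad = uplift_per_ad / uplift_fraction
print(f"Implied baseline: ${baseline_per_ad:.4f} per ad")  # $0.0002 per ad

# Meanwhile merchants were sometimes paying a 5x premium for targeting,
# against a 4% gain on the publisher side.
merchant_cost_multiple = 5.0
print(f"Merchant pays up to {merchant_cost_multiple:.0f}x; "
      f"publisher gains {uplift_fraction:.0%}")
```

The gap between those two numbers is exactly the question Acquisti poses: someone in the middle is capturing the difference.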

Acquisti also made the point that a lack of data protection creates economic winners and losers, arguing this is unavoidable — and thus qualifying the oft-parroted tech industry lobby line that privacy regulation is a bad idea because it would benefit an already dominant group of players. The rebuttal is that a lack of privacy rules also does that. And that’s exactly where we are now.

“There is a sort of magical thinking happening when it comes to targeted advertising [that claims] everyone benefits from this,” Acquisti continued. “Now at first glance this seems plausible. The problem is that upon further inspection you find there is very little empirical validation of these claims… What I’m saying is that we actually don’t know very well to which these claims are true and false. And this is a pretty big problem because so many of these claims are accepted uncritically.”

There’s clearly far more research that needs to be done to robustly interrogate the effectiveness of targeted ads against platform claims, and versus more vanilla types of advertising (i.e. those that don’t demand reams of personal data to function). But the fact that robust research hasn’t been done is itself interesting.

Acquisti noted the difficulty of researching “opaque blackbox” ad exchanges that aren’t at all incentivized to be transparent about what’s going on. Also pointing out that Facebook has sometimes admitted to having made mistakes that significantly inflated its ad engagement metrics.

His wider point is that much current research into the effectiveness of digital ads is problematically narrow and so is exactly missing a broader picture of how consumers might engage with alternative types of less privacy-hostile marketing.

In a nutshell, then, the problem is the lack of transparency from ad platforms; and that lack serving the self same opaque giants.

But there’s more. Critics of the current system point out it relies on mass scale exploitation of personal data to function, and many believe this simply won’t fly under Europe’s tough new GDPR framework.

They are applying legal pressure via a set of GDPR complaints, filed last fall, that challenge the legality of a fundamental piece of the (current) adtech industry’s architecture: Real-time bidding (RTB); arguing the system is fundamentally incompatible with Europe’s privacy rules.

We covered these complaints last November but the basic argument is that bid requests essentially constitute systematic data breaches because personal data is broadcast widely to solicit potential ad buys and thereby poses an unacceptable security risk — rather than, as GDPR demands, people’s data being handled in a way that “ensures appropriate security”.
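To see why the complainants call this a broadcast of personal data, here is a simplified, hypothetical sketch of the kind of fields an OpenRTB-style bid request can carry (the field names follow OpenRTB conventions, but the values and structure here are purely illustrative, not a real request):

```python
import json

# Illustrative bid request: each of these fields goes out to every
# exchange participant invited to bid, before any ad is bought.
bid_request = {
    "id": "req-1234",
    "device": {
        "ip": "203.0.113.7",                       # user's IP address
        "geo": {"lat": 51.5074, "lon": -0.1278},   # location
        "ua": "Mozilla/5.0 ...",                   # fingerprinting input
    },
    "user": {"id": "a1b2c3-cookie-id"},            # persistent pseudonymous ID
    "site": {"page": "https://example.com/health/depression-support"},
}

# The complainants' point: once broadcast for bidding, this payload
# leaves the publisher's control, with no way to audit what losing
# bidders do with it.
print(json.dumps(bid_request, indent=2))
```

Combining a persistent ID, a location, and the page being read is what makes a single request sensitive — and a typical auction sends it to many bidders at once.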

To spell it out, the contention is the entire behavioral advertising business is illegal because it’s leaking personal data at such vast and systematic scale it cannot possibly comply with EU data protection law.

Regulators are considering the argument, and courts may follow. But it’s clear adtech systems that have operated in opaque darkness for years, with no worry of major compliance fines, no longer have the luxury of being able to take their architecture as a given.

Greater legal risk might be catalyst enough to encourage a market shift towards less intrusive targeting; ads that aren’t targeted based on profiles of people synthesized from heaps of personal data but, much like DuckDuckGo’s contextual ads, are only linked to a real-time interest and a generic location. No creepy personal dossiers necessary.

If Acquisti’s research is to be believed — and here’s the kicker for Facebook et al — there’s little reason to think such ads would be substantially less effective than the vampiric microtargeted variant that Facebook founder Mark Zuckerberg likes to describe as “relevant”.

The ‘relevant ads’ badge is of course a self-serving concept which Facebook uses to justify creeping on users while also pushing the notion that its people-tracking business inherently generates major extra value for advertisers. But does it really do that? Or are advertisers buying into another puffed up fake?

Facebook isn’t providing access to internal data that could be used to quantify whether its targeted ads are really worth all the extra conjoined cost and risk. While the company’s habit of buying masses of additional data on users, via brokers and other third party sources, makes for a rather strange qualification. Suggesting things aren’t quite what you might imagine behind Zuckerberg’s drawn curtain.

Behavioral ad giants are facing growing legal risk on another front. The adtech market has long been referred to as a duopoly, on account of the proportion of digital ad spending that gets sucked up by just two people-profiling giants: Google and Facebook (the pair accounted for 58% of the market in 2018, according to eMarketer data) — and in Europe a number of competition regulators have been probing the duopoly.

Earlier this month the German Federal Cartel Office was reported to be on the brink of partially banning Facebook from harvesting personal data from third party providers (including but not limited to some other social services it owns). Though an official decision has yet to be handed down.

While, in March 2018, the French Competition Authority published a meaty opinion raising multiple concerns about the online advertising sector — and calling for an overhaul and a rebalancing of transparency obligations to address publisher concerns that dominant platforms aren’t providing access to data about their own content.

The EC’s competition commissioner, Margrethe Vestager, is also taking a closer look at whether data hoarding constitutes a monopoly. And has expressed a view that, rather than breaking companies up in order to control platform monopolies, the better way to go about it in the modern ICT era might be by limiting access to data — suggesting another potentially looming legal headwind for personal data-sucking platforms.

At the same time, the political risks of social surveillance architectures have become all too clear.

Whether microtargeted political propaganda works as intended or not is still a question mark. But few would support letting attempts to fiddle elections just go ahead and happen anyway.

Yet Facebook has rushed to normalize what are abnormally hostile uses of its tools; aka the weaponizing of disinformation to further divisive political ends — presenting ‘election security’ as just another day-to-day cost of being in the people farming business. When the ‘cost’ for democracies and societies is anything but normal. 

Whether or not voters can be manipulated en masse via the medium of targeted ads, the act of targeting itself certainly has an impact — by fragmenting the shared public sphere which civilized societies rely on to drive consensus and compromise. Ergo, unregulated social media is inevitably an agent of antisocial change.

The solution to technology threatening democracy is far greater transparency: regulating platforms so that we can understand how, why and where data is flowing, and thus get a proper handle on impacts in order to shape desired outcomes.

Greater transparency also offers a route to begin to address commercial concerns about how the modern adtech market functions.

And if and when ad giants are forced to come clean — about how they profile people; where data and value flows; and what their ads actually deliver — you have to wonder what if anything will be left unblemished.

People who know they’re being watched alter their behavior. Similarly, platforms may find behavioral change enforced upon them, from above and below, when it becomes impossible for everyone else to ignore what they’re doing.

The social layer is ironically key to Bitcoin’s security

A funny thing happened in the second half of 2018. At some moment, all the people active in crypto looked around and realized there weren’t very many of us. The friends we’d convinced during the last holiday season were no longer speaking to us. They had stopped checking their Coinbase accounts. The tide had gone out from the beach. Tokens and blockchains were supposed to change the world; how come nobody was using them?

In most cases, still, nobody is using them. In this respect, many crypto projects have succeeded admirably. Cryptocurrency’s appeal is understood by many as freedom from human fallibility. There is no central banker, playing politics with the money supply. There is no lawyer, overseeing the contract. Sometimes it feels like crypto developers adopted the defense mechanism of the skunk. It’s working: they are succeeding at keeping people away.

Some now acknowledge the need for human users, the so-called “social layer,” of Bitcoin and other crypto networks. That human component is still regarded as its weakest link. I’m writing to propose that crypto’s human component is its strongest link. For the builders of crypto networks, how to attract the right users is a question that should come before how to defend against attackers (aka, the wrong users). Contrary to what you might hear on Twitter, when evaluating a crypto network, the demographics and ideologies of its users do matter. They are the ultimate line of defense, and the ultimate decision-maker on direction and narrative.

What Ethereum got right

Since the collapse of The DAO, no one in crypto should be allowed to say “code is law” with a straight face. The DAO was a decentralized venture fund that boldly claimed pure governance through code, then imploded when someone found a loophole. Ethereum, a crypto protocol on which The DAO was built, erased this fiasco with a hard fork, walking back the ledger of transactions to the moment before disaster struck. Dissenters from this social-layer intervention kept going on Ethereum’s original, unforked protocol, calling it Ethereum Classic. To so-called “Bitcoin maximalists,” the DAO fork is emblematic of Ethereum’s trust-dependency, and therefore its weakness.

There’s irony, then, in maximalists’ current enthusiasm for narratives describing Bitcoin’s social-layer resiliency. The story goes: in the event of a security failure, Bitcoin’s community of developers, investors, miners and users are an ultimate layer of defense. We, Bitcoin’s community, have the option to fork the protocol—to port our investment of time, capital and computing power onto a new version of Bitcoin. It’s our collective commitment to a trust-minimized monetary system that makes Bitcoin strong. (Disclosure: I hold bitcoin and ether.)

Even this narrative implies trust—in the people who make up that crowd. Historically, Bitcoin Core developers, who maintain the Bitcoin network’s dominant client software, have also exerted influence, shaping Bitcoin’s road map and the story of its use cases. Ethereum’s flavor of minimal trust is different, having a public-facing leadership group whose word is widely imbibed. In either model, the social layer abides. When they forked away The DAO, Ethereum’s leaders had to convince a community to come along.

You can’t believe in the wisdom of the crowd and discount its ability to see through an illegitimate power grab, orchestrated from the outside. When people criticize Ethereum or Bitcoin, they are really criticizing this crowd, accusing it of a propensity to fall for false narratives.

How do you protect Bitcoin’s codebase?

In September, Bitcoin Core developers patched and disclosed a vulnerability that would have enabled an attacker to crash the Bitcoin network. That vulnerability originated in March 2017, with Bitcoin Core 0.14. It sat there for 18 months until it was discovered.

There’s no doubt Bitcoin Core attracts some of the best and brightest developers in the world, but they are fallible and, importantly, some of them are pseudonymous. Could a state actor, working pseudonymously, produce code good enough to be accepted into Bitcoin’s protocol? Could he or she slip in another vulnerability, undetected, for later exploitation? The answer is undoubtedly yes, it is possible, and it would be naïve to believe otherwise. (I doubt Bitcoin Core developers themselves are so naïve.)

Why is it that no government has yet attempted to take down Bitcoin by exploiting such a weakness? Could it be that governments and other powerful potential attackers are, if not friendly, at least tolerant towards Bitcoin’s continued growth? There’s a strong narrative in Bitcoin culture of crypto persisting against hostility. Is that narrative even real?

The social layer is key to crypto success

Some argue that sexism and racism don’t matter to Bitcoin. They do. Bitcoin’s hodlers should think carefully about the books we recommend and the words we write and speak. If your social layer is full of assholes, your network is vulnerable. Not all hacks are technical. Societies can be hacked, too, with bad or insecure ideas. (There are ever more examples of this outside of crypto.)

Not all white papers are as elegant as Satoshi Nakamoto’s Bitcoin white paper. Many run over 50 pages, dedicating lengthy sections to imagining various potential attacks and how the network’s internal “crypto-economic” system of incentives and penalties would render them bootless. They remind me of the vast digital fortresses my eight-year-old son constructs in Minecraft, bristling with trap doors and turrets.

I love my son (and his Minecraft creations), but the question both he and crypto developers may be forgetting to ask is, why would anyone want to enter this forbidding fortress—let alone attack it? Who will enter, bearing talents, ETH or gold? Focusing on the user isn’t yak shaving, when the user is the ultimate security defense. I’m not suggesting security should be an afterthought, but perhaps a network should be built to bring people in, rather than shut them out.

The author thanks Tadge Dryja and Emin Gün Sirer, who provided feedback that helped hone some of the ideas in this article.

Yahoo agrees $50M settlement package for users hit by massive security breach

One of the largest consumer internet hacks has bred one of the largest class action settlements after Yahoo agreed to pay $50 million to victims of a security breach that’s said to have affected up to 200 million U.S. consumers and some three billion email accounts worldwide.

In what appears to be the closing move in the two-year-old lawsuit, Yahoo — which is now part of Verizon’s Oath business [which is the parent company of TechCrunch] — has proposed to pay $50 million in compensation to an estimated 200 million users in the U.S. and Israel, according to a court filing.

In addition, the company will cover up to $35 million in lawyer fees related to the case and provide affected users in the U.S. with credit monitoring services for two years via AllClear, a package that would retail for around $350. There are also compensation options for small businesses and individuals to claim back costs for losses associated with the hacks. That could include identity theft, delayed tax refunds and any other issues related to data lost at the hands of the breaches. Finally, those who paid for premium Yahoo email services are eligible for a 25 percent refund.

The deal is subject to final approval from U.S. District Judge Lucy Koh of the Northern District of California at a hearing slated for November 29.

Since Yahoo is now part of Oath, the costs will be split 50-50 between Oath and Altaba, the holding company that owns what is left of Yahoo following the acquisition. Altaba last month revealed it had agreed to pay $47 million to settle three legal cases related to the landmark security breach.

Yahoo estimates that three billion accounts were impacted by a series of breaches that began in 2013. The intrusion is believed to have been a state-sponsored attack by Russia, although no strong evidence has been provided to support that claim.

The incident wasn’t reported publicly until 2016, just months after Verizon announced that it would acquire Yahoo’s core business in a $4.8 billion deal.

At the time, Yahoo estimated that the incident had affected “at least” 500 million users, but it later emerged that data on all of Yahoo’s three billion users had been swiped. A second attack a year later stole information that included email addresses and passwords belonging to 500 million Yahoo account holders. Unsurprisingly, the huge attacks saw Verizon negotiate a $350 million discount on the deal.

AdGuard resets all user passwords after account hacks

Popular ad-blocker AdGuard has forcibly reset all of its users’ passwords after it detected hackers trying to break into accounts.

The company said it “detected continuous attempts to login to AdGuard accounts from suspicious IP addresses which belong to various servers across the globe,” in what appeared to be a credential stuffing attack. That’s when hackers take lists of stolen usernames and passwords and try them on other sites.

AdGuard said that the hacking attempts were slowed thanks to rate limiting — preventing the attackers from trying too many passwords in one go. But the effort was “not enough” when the attackers already know the passwords, a blog post said.
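Rate limiting of the kind AdGuard credits with slowing the attack can be sketched as a per-IP sliding window over recent login attempts. The following Python sketch is illustrative only — the class name, limits and window size are assumptions, not AdGuard’s actual implementation:

```python
import time
from collections import defaultdict


class LoginRateLimiter:
    """Allow at most `limit` login attempts per source IP per `window` seconds."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(list)  # ip -> timestamps of recent attempts

    def allow(self, ip, now=None):
        """Return True if this attempt is permitted, False if rate-limited."""
        now = time.monotonic() if now is None else now
        # Drop attempts that have aged out of the window.
        recent = [t for t in self.attempts[ip] if now - t < self.window]
        self.attempts[ip] = recent
        if len(recent) >= self.limit:
            return False  # too many attempts in the window; reject this one
        recent.append(now)
        return True
```

Credential stuffing spreads attempts across many IPs precisely to stay under per-IP limits like this one, which is why AdGuard described rate limiting alone as “not enough.”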

“As a precautionary measure, we have reset passwords to all AdGuard accounts,” said Andrey Meshkov, AdGuard’s co-founder and chief technology officer.

AdGuard has more than five million users worldwide, and is one of the most prominent ad-blockers available.

Although the company said that some accounts were improperly accessed, there wasn’t a direct breach of its systems. It’s not known how many accounts were affected. An email to Meshkov went unreturned at the time of writing.

It’s not clear why attackers targeted AdGuard users, but the company’s response was swift and effective.

The company said it has now set stricter password requirements, and connects to Have I Been Pwned, a breach notification database set up by security expert Troy Hunt, to warn users away from previously breached passwords. Hunt’s database is trusted by both the UK and Australian governments, and integrates with several other password managers and identity solutions.
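Checks against Have I Been Pwned’s password database typically use its range API, which preserves privacy via k-anonymity: the client sends only the first five hex characters of the password’s SHA-1 hash and matches the returned hash suffixes locally, so the password itself never leaves the device. A minimal Python sketch of that client-side logic — the `fetch_range` callable stands in for the HTTP request and is an assumption of this sketch:

```python
import hashlib


def pwned_count(password, fetch_range):
    """Return how many times a password appears in known breaches, or 0.

    `fetch_range(prefix)` should return the response body of
    GET https://api.pwnedpasswords.com/range/<prefix>, which is
    newline-separated lines of "HASH_SUFFIX:COUNT".
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]  # only `prefix` is sent anywhere
    for line in fetch_range(prefix).splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)  # password found in the breach corpus
    return 0  # no match: not in the known-breached set
```

A service can run this check at signup or password change and reject (or warn about) any password with a nonzero count.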

AdGuard also said that it will implement two-factor authentication — a far stronger protection against credential stuffing attacks — but that it’s a “next step” as it “physically can’t implement it in one day.”

North Korea skirts US sanctions by secretly selling software around the globe

Fake social media profiles are useful for more than just sowing political discord among foreign adversaries, as it turns out. A group linked to the North Korean government has been able to duck existing sanctions on the country by concealing its true identity and developing software for clients abroad.

This week, the US Treasury issued sanctions against two tech companies accused of running cash-generating front operations for North Korea: Yanbian Silverstar Network Technology or “China Silver Star,” based near Shenyang, China, and a Russian sister company called Volasys Silver Star. The Treasury also sanctioned China Silver Star’s North Korean CEO Jong Song Hwa.

“These actions are intended to stop the flow of illicit revenue to North Korea from overseas information technology workers disguising their true identities and hiding behind front companies, aliases, and third-party nationals,” Treasury Secretary Steven Mnuchin said of the sanctions.

As the Wall Street Journal reported in a follow-up story, North Korean operatives advertised with Facebook and LinkedIn profiles, solicited business with Freelance.com and Upwork, crafted software using GitHub, communicated over Slack and accepted compensation with PayPal. The country appears to be encountering little resistance putting tech platforms built by US companies to work building software including “mobile games, apps, [and] bots” for unwitting clients abroad.

The US Treasury issued its first warnings of a secret North Korean software development scheme in July, though it did not provide many details at the time. The Wall Street Journal was able to identify “tens of thousands” of dollars stemming from the Chinese front company, though that’s only a representative sample. The company worked as a middleman, contracting its work out to software developers around the globe and then denying payment for their services.

Facebook suspended many suspicious accounts linked to the scheme after they were identified by the Wall Street Journal, including one for “Everyday-Dude.com”:

“A Facebook page for Everyday-Dude.com, showing packages with hundreds of programs, was taken down minutes later as a reporter was viewing it. Pages of some of the account’s more than 1,000 Facebook friends also subsequently disappeared…

“[Facebook] suspended numerous North Korea-linked accounts identified by the Journal, including one that Facebook said appeared not to belong to a real person. After it closed that account, another profile, with identical friends and photos, soon popped up.”

LinkedIn and Upwork similarly removed accounts linked to the North Korean operations.

Beyond the consequences for international relations, software surreptitiously sold by the North Korean government poses considerable security risks. According to the Treasury, the North Korean government makes money off of a “range of IT services and products abroad” including “website and app development, security software, and biometric identification software that have military and law enforcement applications.” For companies unwittingly buying North Korea-made software, the potential for malware that could give the isolated nation eyes and ears beyond its borders is high, particularly given that the country has already demonstrated its offensive cyber capabilities.

Between that and sanctions against doing business with the country, Mnuchin urged the information technology industry and other businesses to be aware of the ongoing scheme, to avoid accidentally contracting with North Korea on tech-related projects.

Security flaw in ‘nearly all’ modern PCs and Macs exposes encrypted data

Most modern computers, even devices with disk encryption, are vulnerable to a new attack that can steal sensitive data in a matter of minutes, new research says.

In new findings published Wednesday, F-Secure said that the existing firmware security measures in every laptop it tested fail to do “a good enough job” of preventing data theft.

F-Secure principal security consultant Olle Segerdahl told TechCrunch that the vulnerabilities put “nearly all” laptops and desktops — both Windows and Mac users — at risk.

The new exploit is built on the foundations of a traditional cold boot attack, which hackers have long used to steal data from a shut-down computer. Modern computers overwrite their memory when a device is powered down to scramble the data from being read. But Segerdahl and his colleague Pasi Saarinen found a way to disable the overwriting process, making a cold boot attack possible again.

“It takes some extra steps,” said Segerdahl, but the flaw is “easy to exploit.” So much so, he said, that it would “very much surprise” him if this technique isn’t already known by some hacker groups.

“We are convinced that anybody tasked with stealing data off laptops would have already come to the same conclusions as us,” he said.

It’s no secret that if someone has physical access to a computer, the chances of them stealing your data are usually greater. That’s why so many use disk encryption — like BitLocker for Windows and FileVault for Macs — to scramble and protect data when a device is turned off.

But the researchers found that in nearly all cases they can still steal data protected by BitLocker and FileVault regardless.

After the researchers figured out how the memory overwriting process works, they said it took just a few hours to build a proof-of-concept tool that prevented the firmware from clearing secrets from memory. From there, the researchers scanned for disk encryption keys, which, when obtained, could be used to mount the protected volume.

It’s not just disk encryption keys at risk, Segerdahl said. A successful attacker can steal “anything that happens to be in memory,” like passwords and corporate network credentials, which can lead to a deeper compromise.

Their findings were shared with Microsoft, Apple and Intel prior to release. According to the researchers, only a smattering of devices aren’t affected by the attack. Microsoft said in a recently updated article on BitLocker countermeasures that using a startup PIN can mitigate cold boot attacks, but Windows users with “Home” licenses are out of luck. And any Apple Mac equipped with a T2 chip is not affected, though a firmware password would still improve protection.

Both Microsoft and Apple downplayed the risk.

Acknowledging that an attacker needs physical access to a device, Microsoft said it encourages customers to “practice good security habits, including preventing unauthorized physical access to their device.” Apple said it was looking into measures to protect Macs that don’t come with the T2 chip.

When reached, Intel would not comment on the record.

In any case, the researchers say, there’s not much hope that affected computer makers can fix their fleet of existing devices.

“Unfortunately, there is nothing Microsoft can do, since we are using flaws in PC hardware vendors’ firmware,” said Segerdahl. “Intel can only do so much, their position in the ecosystem is providing a reference platform for the vendors to extend and build their new models on.”

Companies, and users, are “on their own,” said Segerdahl.

“Planning for these events is a better practice than assuming devices cannot be physically compromised by hackers because that’s obviously not the case,” he said.

Epic Games just gave a perk for folks to turn on 2FA; every other big company should, too

Let’s talk a bit about security.

Most internet users around the world are pretty crap at it, but there are basic tools that companies have, and users can enable, to make their accounts, and lives, a little bit more hacker-proof.

One of these — two-factor authentication — just got a big boost from Epic Games, the maker of what is currently The Most Popular Game In The World: Fortnite.

Epic is already getting a ton of great press for what amounts to very little effort.

Son: Do you know what two-factor authentication is?
Me: Uh, yeah?
Son: I get a free dance on @Fortnitegame if I enable two factor. Can we do that?

Incentives matter.

— Dennis (@DennisF) August 23, 2018

The company is giving users a new emote (the victory dance you’ve seen emulated in airports, playgrounds and parks by kids and tweens around the world) to anyone who turns on two-factor authentication. It’s one small (dance) step for Epic, but one giant leap for securing their users’ accounts.

The thing is any big company could do this (looking at you Microsoft, Apple, Alphabet and any other company with a huge user base).

Apparently the perk of not getting hacked isn’t enough for most users, but if you give anyone the equivalent of a free dance, they’ll likely flock to turn on the feature.

It’s not that two-factor authentication is a panacea for all security woes, but it does make life harder for hackers. Two-factor authentication works on short-lived codes, basically tokens, that are either sent via text message or generated locally by an authenticator app. Text messaging is a pretty weak way to secure things, because the codes can be intercepted or redirected (via SIM swapping, for example), but authenticator apps — like Google Authenticator or Authy — generate the codes on the device itself, so there is nothing in transit to intercept (though they do require installing an app).
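Authenticator apps like these generally implement TOTP (RFC 6238), which derives a short-lived code from a shared secret and the current 30-second time window — the same computation runs on the phone and on the server, so no code is ever transmitted. A minimal sketch using only the Python standard library (illustrative, not any vendor’s actual implementation):

```python
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password for a given counter value."""
    msg = struct.pack(">Q", counter)               # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: HOTP over the current time window."""
    t = time.time() if for_time is None else for_time
    return hotp(key, int(t) // step, digits)
```

The server stores the same secret and simply recomputes the code for the current (and usually one adjacent) time window when validating a login, which is why these codes expire within seconds rather than being replayable later.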

So using SMS-based two-factor authentication is better than nothing, but it’s not Fort Knox (however, these days, even Fort Knox probably isn’t Fort Knox when it comes to security).

Still, anything that makes things harder for crimes of opportunity can help ease the security burden for companies large and small, and the consumers and customers that love them (or at least are forced to pay and use them).

I’m not sure what form the perk could or should take. Maybe it’s the promise of a free e-book, a free download or an opportunity to have a live chat with the celebrity, influencer or athlete of a user’s choice. Whatever it is, there’s clearly something that businesses could do to encourage greater adoption.

Self-preservation isn’t cutting it. Maybe an emote will do the trick.

Australia bans Huawei and ZTE from supplying technology for its 5G network

Australia has blocked Huawei and ZTE from providing equipment for its 5G network, which is set to launch commercially next year. In a tweet, Huawei stated that the Australian government told the company that both it and ZTE are banned from supplying 5G technology to the country, despite Huawei’s assurances that it does not pose a threat to national security.

We have been informed by the Govt that Huawei & ZTE have been banned from providing 5G technology to Australia. This is a extremely disappointing result for consumers. Huawei is a world leader in 5G. Has safely & securely delivered wireless technology in Aust for close to 15 yrs

— Huawei Australia (@HuaweiOZ) August 22, 2018

Earlier today, the Australian government issued new security guidelines for 5G carriers. Although it did not mention Huawei, ZTE or China specifically, it did strongly hint at them by stating “the Government considers that the involvement of vendors who are likely to be subject to extrajudicial directions from foreign government that conflict with Australian law, may risk failure by the carrier to adequately protect a 5G network from unauthorized access or interference.”

Concerns that Huawei, ZTE and other Chinese tech companies will be forced to comply with a new law, passed last year, that obligates all Chinese organizations and citizens to provide information to national intelligence agencies when asked have made several countries wary of using their technology. Earlier this month, the United States banned the use of most Huawei and ZTE technology by government agencies and contractors, six years after a Congressional report first cited the two companies as security threats.

In its new security guidelines, the Australian government stated that differences in the way 5G operates compared to previous network generations introduce new risks to national security. In particular, it noted the diminishing distinctions between the core network, where more sensitive functions like access control and data routing occur, and the edge, or radios that connect customer equipment, like laptops and mobile phones, to the core.

“This new architecture provides a way to circumvent traditional security controls by exploiting equipment in the edge of the network – exploitation which may affect overall network integrity and availability, as well as the confidentiality of customer data. A long history of cyber incidents shows cyber actors target Australia and Australians,” the guidelines stated. “Government has found no combination of technical security controls that sufficiently mitigate the risks.”

Last year, Australia introduced the Telecommunications Sector Security Reforms (TSSR), which takes effect next month and directs carriers and telecommunication service providers to protect their networks and infrastructure from national security threats and also notify the government of any proposed changes that may compromise the security of their network. It also gives the government the power to “intervene and issue directions in cases where there are significant national security concerns that cannot be addressed through other means.”

Huawei’s Australian chairman John Lord said in June that the company had received legal advice that its Australian operations are not bound to Chinese laws and he would refuse to hand over any data to the Chinese government in breach of Australian law. Lord also argued that banning Huawei could hurt local businesses and customers by raising prices and limiting access to technology.

TechCrunch has contacted ZTE and Huawei for comment.