AI


Longtime VC, and happy Miami resident, David Blumberg has a new $225 million fund

Blumberg Capital, founded in 1991 by investor David Blumberg, has just closed its fifth early-stage venture fund with $225 million, a vehicle that Blumberg says was oversubscribed — he planned to raise $200 million — and that has already been used to invest in 16 startups around the world (the firm has small offices in San Francisco, New York, Tel Aviv, and Miami, where Blumberg moved his family last year).

We caught up with him earlier this week to talk shop and he sounded almost ecstatic about the current market, which has evidently been good for returns, with Blumberg Capital’s biggest hits tied to Nutanix (it claims a 68x return), DoubleVerify (a 98x return at IPO in April, the firm says), Katapult (which went public via SPAC in July), Addepar (currently valued above $2 billion) and Braze (it submitted its S-1 in June).

We also talked a bit about his new life in Florida, which he was quick to note is “not a clone of Silicon Valley,” in case we had that idea. Lastly, he told us why he thinks we’re in a “golden era of applying intelligence to every business,” from mining to the business of athletic performance.

More from our conversation, edited lightly for length and clarity, follows:

TC: What are you funding right now?

DB: Our last 30 to 40 deals have basically been about big data that’s been analyzed by artificial intelligence of some sort, then riding in a better wrapper of software process automation on rails of internet and mobility. Okay, that’s a lot of buzzwords.

TC: Yes.

DB: What I’m saying is that this ability to take raw information data that’s either been sitting around and not analyzed, or from new sources of data like sensors or social media or many other places, then analyze it and take it to all these businesses that have been there forever, is beginning to [have] incremental [impacts] that may sound small [but add up].

One of our [unannounced] companies applies AI to mining — lithium mining and gold and copper — so miners don’t waste their time before finding the richest vein of deposit. We partner with mining owners and we bring extra data that they don’t have access to — some is proprietary, some is public — and because we’re experts at the AI modeling of it, we can apply it to their geography and geology, and as part of the business model, we take part of the mine in return.

TC: So your fund now owns not just equity but part of a mine?

DB: This is evidently done a lot in what’s called E&P, exploration and production in the oil and gas industry, and we’re just following a time-tested model, where some of the service providers put in value and take out a share. So as we see it, it aligns our interests and the better we do for them, the better they do.

TC: This fund is around the same size as your fourth fund, which closed with $207 million, seemingly by design. How do you think about check sizes in this market?

DB: We write checks of $1 million to $6 million generally. We could go down a little bit for something in a seed where we can’t get more of a slice, but we like to have large ownership up front. We found that to have a fund return at least 3x — and our funds seem to be returning much more than that — [we need to be math-minded about things].

We have 36 companies in our portfolio typically, and 20% of them fail, 20% of them are our superstars, and 60% are kind of medium. Of those superstars, six of them have to return $100 million each in a $200 million fund to make it a $600 million return, and to get six companies to [produce a] $100 million [for us] they have to reach a billion dollars in value, where we own 10% at the end.
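The math Blumberg walks through can be written out as a quick sanity check. The figures below are the ones he gives; the variable names and the structure of the calculation are ours:

```python
# Sketch of the portfolio math described above; all figures come from
# the interview, the calculation itself is just an illustration.

fund_size = 200_000_000
n_companies = 36

n_failures = round(n_companies * 0.20)    # ~7 write-offs
n_superstars = round(n_companies * 0.20)  # ~7 potential big winners
n_medium = n_companies - n_failures - n_superstars  # ~22 middling outcomes

# Six superstars returning $100M each makes the fund a 3x return.
fund_multiple = (6 * 100_000_000) / fund_size

# A $100M payout at 10% final ownership implies a $1B exit value.
required_exit_value = 100_000_000 / 0.10

print(fund_multiple)        # 3.0
print(required_exit_value)  # 1000000000.0
```

The point of the exercise: even with most of the portfolio failing or treading water, a handful of companies reaching unicorn scale at 10% ownership is what carries the fund past 3x.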

TC: You’re buying 10% and maintaining your pro rata, or is this after being diluted over numerous rounds?

DB: It’s more like we want 15% to 20% of a company and it gets [diluted] down to 10%. And it’s been working. Some of our funds are way above that number.

TC: Are all four of your earlier funds in the black?

DB: Yes. I love to say this: We have never, ever lost money for our fund investors.

TC: You were among a handful of VCs who were cited quite a lot last year for hightailing it out of the Bay Area for Miami. One year into the move, how is it going?

DB: It is not a clone of Silicon Valley. They are different and add value each in their own way. But Florida is a great place for our family to be, and I find for our business, it’s going to be great as well. I can be on the phone to Israel and New York without any time zone-related problems. Some of our companies are moving here, including one from Israel recently, one from San Francisco, and one from Texas. A lot of our LPs are moving here or live here already. We can also go up and down to South America for distribution deals more easily.

If we need to get to California or New York, airplanes still work, too, so it hasn’t been a negative at all. I’m going to a JPMorgan event tonight for a bunch of tech founders where there should be 150 people.

TC: That sounds great, though how did you feel about summer in Miami?

DB: We were in France.

Pictured above, from left to right: Firm founder David Blumberg, managing director Yodfat Harel Buchris, COO Steve Gillan, and managing director Bruce Taragin.

The responsibilities of AI-first investors

Ash Fontana
Contributor

Ash Fontana, a managing director at Zetta Ventures, is the author of “The AI-First Company: How to Compete and Win with Artificial Intelligence.”

Investors in AI-first technology companies serving the defense industry, such as Palantir, Primer and Anduril, are doing well. Anduril, for one, reached a valuation of over $4 billion in less than four years. Many other companies that build general-purpose, AI-first technologies — such as image labeling — receive large (undisclosed) portions of their revenue from the defense industry.

Investors in AI-first technology companies that aren’t even intended to serve the defense industry often find that these firms eventually (and sometimes inadvertently) help other powerful institutions, such as police forces, municipal agencies and media companies, prosecute their duties.

Most do a lot of good work, such as DataRobot helping agencies understand the spread of COVID, HASH running simulations of vaccine distribution or Lilt making school communications available to immigrant parents in a U.S. school district.


However, there are also some less positive examples — technology made by Israeli cyber-intelligence firm NSO was used to hack 37 smartphones belonging to journalists, human-rights activists, business executives and the fiancée of murdered Saudi journalist Jamal Khashoggi, according to a report by The Washington Post and 16 media partners. The report claims the phones were on a list of over 50,000 numbers based in countries that surveil their citizens and are known to have hired the services of the Israeli firm.

Investors in these companies may now be asked challenging questions by other founders, limited partners and governments about whether the technology is too powerful, enables too much or is applied too broadly. These are questions of degree, but are sometimes not even asked upon making an investment.

I’ve had the privilege of talking to a lot of people with lots of perspectives — CEOs of big companies, founders of (currently!) small companies and politicians — since publishing “The AI-First Company” and investing in such firms for the better part of a decade. I’ve been getting one important question over and over again: How do investors ensure that the startups in which they invest responsibly apply AI?

Let’s be frank: It’s easy for startup investors to hand-wave away such an important question by saying something like, “It’s so hard to tell when we invest.” Startups are nascent forms of something to come. However, AI-first startups are working with something powerful from day one: Tools that allow leverage far beyond our physical, intellectual and temporal reach.

AI not only gives people the ability to put their hands around heavier objects (robots) or get their heads around more data (analytics), it also gives them the ability to bend their minds around time (predictions). When people can make predictions and learn as they play out, they can learn fast. When people can learn fast, they can act fast.

Like any tool, one can use these tools for good or for bad. You can use a rock to build a house or you can throw it at someone. You can use gunpowder for beautiful fireworks or firing bullets.

Substantially similar AI-based computer vision models can be used to figure out the moves of a dance group or a terrorist group. AI-powered drones can aim a camera at us while going off ski jumps, but they can also aim a gun at us.

This article covers the basics, metrics and politics of responsibly investing in AI-first companies.

The basics

Investors in and board members of AI-first companies must take at least partial responsibility for the decisions of the companies in which they invest.

Investors influence founders, whether they intend to or not. Founders constantly ask investors about what products to build, which customers to approach and which deals to execute. They do this to learn and improve their chances of winning. They also do this, in part, to keep investors engaged and informed because they may be a valuable source of capital.

Kapacity.io is using AI to drive energy and emissions savings for real estate

Y Combinator-backed Kapacity.io is on a mission to accelerate the decarbonization of buildings by using AI-generated efficiency savings to encourage electrification of commercial real estate — wooing buildings away from reliance on fossil fuels to power their heating and cooling needs.

It does this by providing incentives to building owners/occupiers to shift to clean energy usage through a machine learning-powered software automation layer.

The startup’s cloud software integrates with buildings’ HVAC systems and electricity meters — drawing on local energy consumption data to calculate and deploy real-time adjustments to heating/cooling systems which not only yield energy and CO2 emissions savings but generate actual revenue for building owners/tenants — paying them to reduce consumption, such as at times of peak energy demand on the grid.

“We are controlling electricity consumption in buildings, focusing on heating and cooling devices — using AI machine learning to optimize and find the best ways to consume electricity,” explains CEO and co-founder Jaakko Rauhala, a former consultant in energy technology. “The actual method is known as ‘demand response’. Basically that is a way for electricity consumers to get paid for adjusting their energy consumption, based on a utility company’s demand.

“For example if there is a lot of wind power production and suddenly the wind drops or the weather changes and the utility company is running power grids they need to balance that reduction — and the way to do that is either you can fire up natural gas turbines or you can reduce power consumption… Our product estimates how much can we reduce electricity consumption at any given minute. We are [targeting] heating and cooling devices because they consume a lot of electricity.”

“The way we see this is this is a way we can help our customers electrify their building stocks faster because it makes their investments more lucrative and in addition we can then help them use more renewable electricity because we can shift the use from fossil fuels to other areas. And in that we hope to help push for a more greener power grid,” he adds.

Kapacity’s approach is applicable in deregulated energy markets, where third parties can offer energy-saving services and fluctuations in energy demand are managed by an auction process involving the trading of surplus energy — typically overseen by a transmission system operator — to ensure energy producers have the right power balance to meet customer needs.

Demand for energy can fluctuate regardless of the type of energy production feeding the grid, but renewable energy sources tend to increase the volatility of energy markets as production can be less predictable versus legacy energy generation (like nuclear or burning fossil fuels) — wind power, for example, depends on when and how strongly the wind is blowing (which both varies and isn’t perfectly predictable). So as economies around the world dial up efforts to tackle climate change and hit critical carbon emissions reduction targets, there’s growing pressure to shift away from fossil fuel-based power generation toward cleaner, renewable alternatives. And the real estate sector specifically remains a major generator of CO2, so is squarely in the frame for “greening”.

Simultaneously, decarbonization and the green shift look likely to drive demand for smart solutions to help energy grids manage increasing complexity and volatility in the energy supply mix.

“Basically more wind power — and solar, to some extent — correlates with demand for balancing power grids and this is why there is a lot of talk usually about electricity storage when it comes to renewables,” says Rauhala. “Demand response, in the way that we do it, is an alternative for electricity storage units. Basically we’re saying that we already have a lot of electricity consuming devices — and we will have more and more with electrification. We need to adjust their consumption before we invest billions of dollars into other systems.”

“We will need a lot of electricity storage units — but we try to push the overall system efficiency to the maximum by utilising what we already have in the grid,” he adds.

There are of course limits to how much “adjustment” (read: switching off) can be done to a heating or cooling system by even the cleverest AI without building occupants becoming uncomfortable.

But Kapacity’s premise is that small adjustments — say turning off the boilers/coolers for five, 15 or 30 minutes — can go essentially unnoticed by building occupants if done right, allowing the startup to tout a range of efficiency services for its customers; such as a peak-shaving offering, which automatically reduces energy usage to avoid peaks in consumption and generate significant energy cost savings.
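As a rough illustration of the peak-shaving idea — Kapacity’s actual control system is not public, so every name and threshold below is hypothetical — a controller might briefly curtail HVAC load whenever forecast demand crosses a peak threshold, while capping how long the system stays off:

```python
# Minimal, hypothetical sketch of peak shaving: curtail HVAC load for
# short stretches when forecast demand exceeds a threshold. All names
# and numbers are illustrative, not Kapacity's implementation.

def peak_shaving_schedule(forecast_kw, threshold_kw, max_off_minutes=15):
    """Return per-minute curtailment decisions (True = reduce HVAC)."""
    schedule = []
    off_streak = 0
    for demand in forecast_kw:
        if demand > threshold_kw and off_streak < max_off_minutes:
            schedule.append(True)   # shave the peak
            off_streak += 1
        else:
            schedule.append(False)  # run normally; comfort recovers
            off_streak = 0
    return schedule

# One-minute forecast with a short demand spike:
decisions = peak_shaving_schedule([80, 120, 130, 90, 70], threshold_kw=100)
# -> [False, True, True, False, False]
```

The `max_off_minutes` cap stands in for the comfort constraint the article describes: adjustments of five to 30 minutes are short enough that indoor temperature stays within its normal fluctuation range.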

“Our goal — which is a very ambitious goal — is that the customers and occupants in the buildings wouldn’t notice the adjustments. And that they would fall into the normal range of temperature fluctuations in a building,” says Rauhala.

Kapacity’s algorithms are designed to understand how to make dynamic adjustments to buildings’ heating/cooling without compromising “thermal comfort”, as Rauhala puts it — noting that co-founder (and COO) Sonja Salo has both a PhD in demand response and researched thermal comfort during a stint as a visiting researcher at UC Berkeley — making the area a specialist focus for the engineer-led founding team.

At the same time, the carrots it’s dangling at commercial real estate owners to sign up for a little algorithmic HVAC tweaking look substantial: Kapacity says its system has been able to achieve a 25% reduction in electricity costs and a 10% reduction in CO2 emissions in early pilots, although those tests have so far been limited to its home market.

Its other co-founder, Rami El Geneidy, researched smart algorithms for demand response involving heat pumps for his PhD dissertation — and heat pumps are another key focus for the team’s tech, per Rauhala.

Heat pumps are a low-carbon technology that’s fairly commonly used in the Nordics for heating buildings, but whose use is starting to spread as countries around the world look for greener ways to heat buildings.

In the U.K., for example, the government announced a plan last year to install hundreds of thousands of heat pumps per year by 2028 as it seeks to move the country away from widespread use of gas boilers to heat homes. And Rauhala names the U.K. as one of the startup’s early target markets — along with the European Union and the U.S., where they also envisage plenty of demand for their services.

While the initial focus is the commercial real estate sector, he says they are also interested in residential buildings — noting that from a “tech core point of view we can do any type of building”.

“We have been focusing on larger buildings — multifamily buildings, larger office buildings, certain types of industrial or commercial buildings so we don’t do single-family detached homes at the moment,” he goes on, adding: “We have been looking at that and it’s an interesting avenue but our current pilots are in larger buildings.”

The Finnish startup was only founded last year — taking in a pre-seed round of funding from Nordic Makers prior to getting backing from YC — where it will be presenting at the accelerator’s demo day next week. (But Rauhala won’t comment on any additional fundraising plans at this stage.)

He says it’s spun up five pilot projects over the last seven months involving commercial landlords, utilities, real estate developers and engineering companies (all in Finland for now), although — again — full customer details are not yet being disclosed. But Rauhala tells us they expect to move to their first full commercial deals with pilot customers this year.

“The reason why our customers are interested in using our products is that this is a way to make electrification cheaper because they are being paid for adjusting their consumption and that makes their operating cost lower and it makes investments more lucrative if — for example — you need to switch from natural gas boilers to heat pumps so that you can decarbonize your building,” he also tells us. “If you connect the new heat pump running on electricity — if you connect that to our service we can reduce the operating cost and that will make it more lucrative for everybody to electrify their buildings and run their systems.

“We can also then make their electricity consumed more sustainable because we are shifting consumption away from hours with most CO2 emissions on the grid. So we try to avoid the hours when there’s a lot of fossil fuel-based production in the grid and try to divert that into times when we have more renewable electricity.

“So basically the big question we are asking is how do we increase the use of renewables and the way to achieve that is asking when should we consume? Well we should consume electricity when we have more renewable in the grid. And that is the emission reduction method that we are applying here.”
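The consumption-shifting idea Rauhala describes — run flexible loads when the grid is greenest — can be sketched as a simple scheduling problem. The forecast data and function below are hypothetical, not Kapacity’s implementation:

```python
# Illustrative sketch of carbon-aware load shifting: given a day-ahead
# forecast of grid CO2 intensity (gCO2/kWh), pick the lowest-carbon
# hours in which to run a flexible load such as a heat pump.

def greenest_hours(co2_intensity_by_hour, hours_needed):
    """Return the indices of the lowest-carbon hours, sorted by time."""
    ranked = sorted(range(len(co2_intensity_by_hour)),
                    key=lambda h: co2_intensity_by_hour[h])
    return sorted(ranked[:hours_needed])

# A toy 6-hour forecast: renewables-heavy hours have lower intensity.
forecast = [400, 380, 120, 90, 150, 420]
print(greenest_hours(forecast, hours_needed=2))  # [2, 3]
```

A real system would also have to respect the thermal-comfort constraints discussed earlier, rather than shifting load purely on carbon intensity.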

In terms of limitations, Kapacity’s software-focused approach can’t work in every type of building — requiring that real estate customers have some ability to gather energy consumption (and potentially temperature) data from their buildings remotely, such as via IoT devices.

“The typical data that we need is basic information on the heating system — is it running at 100% or 50% or what’s the situation? That gets us pretty far,” says Rauhala. “Then we would like to know indoor temperatures. But that is not mandatory in the sense that we can still do some basic adjustments without that.”

It also of course can’t offer much in the way of savings to buildings that are running 100% on natural gas (or oil) — i.e. with electricity only used for lighting (turning lights off when people are inside buildings obviously wouldn’t fly); there must be some kind of air conditioning, cooling or heat pump systems already installed (or the use of electric hot water boilers).

“An old building that runs on oil or natural gas — that’s a target for decarbonization,” he continues. “That’s a target where you could consider installing heat pumps and that is where we could help some of our customers or potential customers to say OK we need to estimate how much would it cost to install a heat pump system here and that’s where our product can come in and we can say you can reduce the operating cost with demand response. So maybe we should do something together here.”

Rauhala also confirms that Kapacity’s approach does not require invasive levels of building occupant surveillance, telling TechCrunch: “We don’t collect information that is under GDPR [General Data Protection Regulation], I’ll put it that way. We don’t take personal data for this demand response.”

So any guesstimates its algorithms are making about building occupants’ tolerance for temperature changes are, therefore, not going to be based on specific individuals — but may, presumably, factor in aggregated information related to specific industry/commercial profiles.

The Helsinki-based startup is not the only one looking at applying AI to drive energy cost and emissions savings in the commercial buildings sector — another we spoke to recently is Düsseldorf-based Dabbel, for example. And plenty more are likely to take an interest in the space as governments start to pump more money into accelerating decarbonization.

Asked about competitive differentiation, Rauhala points to a focus on real-time adjustments and heat pump technologies.

“One of our key things is we’re developing a system so that we can do close to real-time control — very, very short-term control. That is a valuable service to the power grid so we can then quickly adjust,” he says. “And the other one is we are focusing on heat pump technologies to get started — heat pumps here in the Nordics are a very common and extremely good way to decarbonize and understanding how we can combine these to demand response with new heat pumps that is where we see a lot of advantages to our approach.”

“Heat pumps are a bit more technically complex than your basic natural gas boiler so there are certain things that have to be taken [into] account and that is where we have been focusing our efforts,” he goes on, adding: “We see heat pumps as an excellent way to decarbonize the global building stock and we want to be there and help make that happen.”

Per capita, the Nordics have the most heat pump installations, according to Rauhala — including a lot of ground source heat pump installations, which can replace fossil fuel consumption entirely.

“You can run your building with a ground source heat pump system entirely — you don’t need any supporting systems for it. And that is the area where we here in Europe are more far ahead than in the U.S.,” he says on that.

“The U.K. government is pushing for a lot of heat pump installations and there are incentives in place for people to replace their existing natural gas systems or whatever they have. So that is very interesting from our point of view. The U.K. also has a lot of wind power coming online and there have been days when the U.K. has been running 100% with renewable electricity which is great. So that actually is a really good thing for us. But then in the longer term in the U.S. — Seattle, for example, has banned the use of fossil fuels in new buildings so I’m very confident that the market in the U.S. will open up more and quickly. There’s a lot of opportunities in that space as well.

“And of course from a cooling perspective air conditioning in general in the U.S. is very widespread — especially in commercial buildings so that is already an existing opportunity for us.”

“My estimate on how valuable electricity use for heating and cooling is it’s tens of billions of dollars annually in the U.S. and EU,” he adds. “There’s a lot of electricity being used already for this and we expect the market to grow significantly.”

On the business model front, the startup’s cloud software looks set to follow a SaaS model but the plan is also to take a commission of the savings and/or generated income from customers. “We also have the option to provide the service with a fixed fee, which might be easier for some customers, but we expect the majority to be under a commission,” adds Rauhala.

Looking ahead, were the sought-for global shift away from fossil fuels to be wildly successful — and all commercial buildings’ gas/oil boilers got replaced with 100% renewable power systems in short order — there would still be a role for Kapacity’s control software to play, generating energy cost savings for its customers, even though our (current) parallel pressing need to shrink carbon emissions would evaporate in this theoretical future.

“We’d be very happy,” says Rauhala. “The way we see emission reductions with demand response now is it’s based on the fact that we do still have fossil fuels power system — so if we were to have a 100% renewable power system then the electricity does nothing to reduce emissions from the electricity consumption because it’s all renewable. So, ironically, in the future we see this as a way to push for a renewable energy system and makes that transition happen even faster. But if we have a 100% renewable system then there’s nothing [in terms of CO2 emissions] we can reduce but that is a great goal to achieve.”


Cardiomatics bags $3.2M for its ECG-reading AI

Poland-based healthtech AI startup Cardiomatics has announced a $3.2M seed raise to expand use of its electrocardiogram (ECG) reading automation technology.

The round is led by Central and Eastern European VC Kaya, with Nina Capital, Nova Capital and Innovation Nest also participating.

The seed raise also includes a $1M non-equity grant from the Polish National Centre of Research and Development.

The 2017-founded startup sells a cloud tool to speed up diagnosis and drive efficiency for cardiologists, clinicians and other healthcare professionals who interpret ECGs — automating the detection and analysis of some 20 heart abnormalities and disorders, with the software generating reports on scans in minutes, faster than a trained human specialist can work.

Cardiomatics touts its tech as helping to democratize access to healthcare — saying the tool enables cardiologists to optimise their workflow so they can see and treat more patients. It also says it allows GPs and smaller practices to offer ECG analysis to patients without needing to refer them to specialist hospitals.

The AI tool has analyzed more than 3 million hours of ECG signals commercially to date, per the startup, which says its software is being used by more than 700 customers in 10+ countries, including Switzerland, Denmark, Germany and Poland.

The software is able to integrate with more than 25 ECG monitoring devices at this stage, and it touts offering a modern cloud software interface as a differentiator vs legacy medical software.

Asked how the accuracy of its AI’s ECG readings has been validated, the startup told us: “The data set that we use to develop algorithms contains more than 10 billion heartbeats from approximately 100,000 patients and is systematically growing. The majority of the data-sets we have built ourselves, the rest are publicly available databases.

“Ninety percent of the data is used as a training set, and 10% for algorithm validation and testing. According to the data-centric AI [approach], we attach great importance to the test sets to be sure that they contain the best possible representation of signals from our clients. We check the accuracy of the algorithms in experimental work during the continuous development of both algorithms and data with a frequency of once a month. Our clients check it every day in clinical practice.”
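The 90/10 split Cardiomatics describes is a standard evaluation pattern. As a generic illustration — not the company’s actual pipeline, whose details are proprietary — a seeded hold-out split over labeled heartbeat data might look like:

```python
# Generic sketch of a 90/10 train/test hold-out split, as described
# above. The function and data are illustrative only.

import random

def split_beats(beats, labels, holdout_fraction=0.10, seed=0):
    """Shuffle and split labeled heartbeats into train and held-out test sets."""
    idx = list(range(len(beats)))
    random.Random(seed).shuffle(idx)  # seeded, so the split is reproducible
    n_test = int(len(beats) * holdout_fraction)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    train = [(beats[i], labels[i]) for i in train_idx]
    test = [(beats[i], labels[i]) for i in test_idx]
    return train, test

beats = [[i] for i in range(100)]        # toy stand-in for ECG segments
labels = [i % 2 for i in range(100)]     # toy binary abnormality labels
train, test = split_beats(beats, labels)
print(len(train), len(test))  # 90 10
```

In practice — as the company’s answer implies — the hard part is not the split itself but making sure the held-out set stays representative of real clinical signals as the data set grows.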

Cardiomatics said it will use the seed funding to invest in product development, expand its business activities in existing markets and gear up to launch into new markets.

“Proceeds from the round will be used to support fast-paced expansion plans across Europe, including scaling up our market-leading AI technology and ensuring physicians have the best experience. We prepare the product to launch into new markets too. Our future plans include obtaining FDA certification and entering the US market,” it added.

The AI tool received European medical device certification in 2018 — although it’s worth noting that the European Union’s regulatory regime for medical devices and AI is continuing to evolve, with an update to the bloc’s Medical Devices Directive (now known as the EU Medical Device Regulation) coming into application earlier this year (May).

A new risk-based framework for applications of AI — aka the Artificial Intelligence Act — is also incoming and will likely expand compliance demands on AI healthtech tools like Cardiomatics, introducing requirements such as demonstrating safety, reliability and a lack of bias in automated results.

Asked about the regulatory landscape it said: “When we launched in 2018 we were one of the first AI-based solutions approved as medical device in Europe. To stay in front of the pace we carefully observe the situation in Europe and the process of legislating a risk-based framework for regulating applications of AI. We also monitor draft regulations and requirements that may be introduced soon. In case of introducing new standards and requirements for artificial intelligence, we will immediately undertake their implementation in the company’s and product operations, as well as extending the documentation and algorithms validation with the necessary evidence for the reliability and safety of our product.”

However it also conceded that objectively measuring efficacy of ECG reading algorithms is a challenge.

“An objective assessment of the effectiveness of algorithms can be very challenging,” it told TechCrunch. “Most often it is performed on a narrow set of data from a specific group of patients, registered with only one device. We receive signals from various groups of patients, coming from different recorders. We are working on a method of assessing the effectiveness of our algorithms which would allow us to reliably evaluate their performance regardless of various factors accompanying the study, including the recording device or the social group on which it would be tested.”

“When analysis is performed by a physician, ECG interpretation is a function of experience, rules and art. When a human interprets an ECG, they see a curve. It works on a visual layer. An algorithm sees a stream of numbers instead of a picture, so the task becomes a mathematical problem. But, ultimately, you cannot build effective algorithms without knowledge of the domain,” it added. “This knowledge and the experience of our medical team are a piece of art in Cardiomatics. We shouldn’t forget that algorithms are also trained on the data generated by cardiologists. There is a strong correlation between the experience of medical professionals and machine learning.”

How we built an AI unicorn in 6 years

Alex Dalyac
Contributor

Alex Dalyac is the CEO and co-founder of Tractable, which develops artificial intelligence for accident and disaster recovery.

Today, Tractable is worth $1 billion. Our AI is used by millions of people across the world to recover faster from road accidents, and it also helps recycle as many cars as Tesla puts on the road.

And yet six years ago, Tractable was just me and Raz (Razvan Ranca, CTO), two college grads coding in a basement. Here’s how we did it, and what we learned along the way.

Build upon a fresh technological breakthrough

In 2013, I was fortunate to get into artificial intelligence (more specifically, deep learning) six months before it blew up internationally. It started when I took Geoffrey Hinton’s Coursera course, “Neural Networks for Machine Learning.” It was like being lovestruck. Back then, to me AI was science fiction, like “The Terminator.”


But an article in the tech press said the academic field was amid a resurgence. As a result of 100x larger training data sets and 100x higher compute power becoming available by reprogramming GPUs (graphics cards), a huge leap in predictive performance had been attained in image classification a year earlier. This meant computers were starting to be able to understand what’s in an image — like humans do.

The next step was getting this technology into the real world. While at university — Imperial College London — teaming up with much more skilled people, we built a plant recognition app with deep learning. We walked our professor through Hyde Park, watching him take photos of flowers with the app and laughing from joy as the AI recognized the right plant species. This had previously been impossible.

I started spending every spare moment on image classification with deep learning. Still, no one was talking about it in the news — even Imperial’s computer vision lab wasn’t yet on it! I felt like I was in on a revolutionary secret.

Looking back, narrowly focusing on a branch of applied science undergoing a breakthrough paradigm shift that hadn’t yet reached the business world changed everything.

Search for complementary co-founders who will become your best friends

I’d previously been rejected from Entrepreneur First (EF), one of the world’s best incubators, for not knowing anything about tech. Having changed that, I applied again.

The last interview was a hackathon, where I met Raz. He was doing machine learning research at Cambridge, had topped EF’s technical test, and published papers on reconstructing shredded documents and on poker bots that could detect bluffs. His bare-bones webpage read: “I seek data-driven solutions to currently intractable problems.” Now that had a ring to it (and where we’d get the name for Tractable).

That hackathon, we coded all night. The morning after, he and I knew something special was happening between us. We moved in together and would spend years side by side, 24/7, from waking up to Pantera in the morning to coding marathons at night.

But we also wouldn’t have got where we are without Adrien (Cohen, president), who joined as our third co-founder right after our seed round. Adrien had previously co-founded Lazada, an Amazon-style online marketplace in Southeast Asia, which sold to Alibaba for $1.5 billion. Adrien would teach us how to build a business, inspire trust and hire world-class talent.

Find potential customers early so you can work out market fit

Tractable started at EF with a head start — a paying customer. Our first use case was … plastic pipe welds.

It was as glamorous as it sounds. Pipes that carry water and natural gas to your home are made of plastic. They’re connected by welds (melt the two plastic ends, connect them, let them cool down and solidify again as one). Image classification AI could visually check people’s weld setups to ensure good quality. Most of all, it was real-world value for breakthrough AI.

And yet in the end, they — our only paying customer — stopped working with us, just as we were raising our first round of funding. That was rough. Luckily, the number of pipe weld inspections was too small a market to interest investors, so we explored other use cases — utilities, geology, dermatology and medical imaging.

Visualping raises $6M to make its website change monitoring service smarter

Visualping, a service that can help you monitor websites for changes like price drops or other updates, announced that it has raised a $6 million extension to the $2 million seed round it announced earlier this year. The round was led by Seattle-based FUSE Ventures, a relatively new firm with investors who spun out of Ignition Partners last year. Prior investors Mistral Venture Partners and N49P also participated.

The Vancouver-based company is part of the current Google for Startups Accelerator class in Canada, a program focused on services that leverage AI and machine learning. Website monitoring may not seem like an obvious area where machine learning can add a lot of value, but if you’ve ever used one of these services, you know they can unleash a plethora of false alerts. For the most part, after all, these tools simply look for a change in a website’s underlying code and then trigger an alert based on that (and maybe some other parameters you’ve set).
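To see why the naive approach generates false alerts, consider a minimal sketch of code-level change detection. This is a hypothetical illustration, not Visualping’s implementation: it hashes the page markup after filtering out lines containing known-noisy substrings (rotating tokens, timestamps). The hand-maintained filter list is exactly the brittle part that a learned monitoring configuration would aim to replace.

```python
import hashlib

def page_fingerprint(html, ignore_substrings=("csrf", "timestamp")):
    # Naive approach: drop lines containing known-noisy substrings,
    # then hash whatever markup remains.
    kept = [line for line in html.splitlines()
            if not any(s in line.lower() for s in ignore_substrings)]
    return hashlib.sha256("\n".join(kept).encode()).hexdigest()

def changed(old_html, new_html):
    return page_fingerprint(old_html) != page_fingerprint(new_html)

v1 = "<p>Price: $20</p>\n<meta name='csrf' content='abc'>"
v2 = "<p>Price: $20</p>\n<meta name='csrf' content='xyz'>"  # only a token rotated
v3 = "<p>Price: $15</p>\n<meta name='csrf' content='abc'>"  # a real price drop

print(changed(v1, v2))  # False: the noisy token was filtered out
print(changed(v1, v3))  # True: meaningful change detected
```

Every noisy element that isn’t on the ignore list fires a false alert, which is why feedback from 1.5 million users is valuable training signal for deciding, per site, what actually matters.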


Earlier this week, Visualping launched its first machine learning-based tools to avoid just that. The company argues that it can eliminate up to 80% of false alerts by combining feedback from its more than 1.5 million users with its new ML algorithms. Thanks to this, Visualping can now learn the best configuration for how to monitor a site when users set up a new alert.

“Visualping has the hearts of over a million people across the world, as well as the vast majority of the Fortune 500. To be a part of their journey and to lead this round of financing is a dream,” FUSE’s Brendan Wales said.

Visualping founder and CEO Serge Salager tells me that the company plans to use the new funding to focus on building out its product, but also to build a commercial team. So far, he said, the company’s growth has been primarily product-led.

As a part of these efforts, the company also plans to launch Visualping Business, with support for these new ML tools and additional collaboration features, and Visualping Personal for individual users who want to monitor things like ticket availability for concerts, news, price drops or job postings. For now, the personal plan will not include support for ML. “False alerts are not a huge problem for personal use, where people are checking two or three websites, but a huge problem for enterprise, where teams need to process hundreds of alerts per day,” Salager told me.

The current idea is to launch these new plans in November, together with mobile apps for iOS and Android. The company will also relaunch its browser extensions around that time.

It’s also worth noting that while Visualping monetizes its web-based service, you can still use the extension in the browser for free.

Quantexa raises $153M to build out AI-based big data tools to track risk and run investigations

As financial crime has become significantly more sophisticated, so too have the tools that are used to combat it. Now, Quantexa — one of the more interesting startups that has been building AI-based solutions to help detect and stop money laundering, fraud, and other illicit activity — has raised a growth round of $153 million, both to continue expanding that business in financial services and to bring its tools into a wider context, so to speak: linking up the dots around all customer and other data.

“We’ve diversified outside of financial services and are working with government, healthcare, telcos and insurance,” Vishal Marria, its founder and CEO, said in an interview. “That has been substantial. Given the whole journey that the market’s gone through in contextual decision intelligence as part of the bigger digital transformation, it was inevitable.”

The Series D values the London-based startup between $800 million and $900 million on the heels of Quantexa growing its subscriptions revenues 108% in the last year.

Warburg Pincus led the round, with existing backers Dawn Capital, AlbionVC, Evolution Equity Partners (a specialist cybersecurity VC), HSBC, ABN AMRO Ventures and British Patient Capital also participating. The valuation is a significant hike up for Quantexa, which was valued between $200 million and $300 million in its Series C last July. It has now raised over $240 million to date.

Quantexa got its start out of a gap in the market that Marria identified when he was working as a director at Ernst & Young tasked with helping its clients with money laundering and other fraudulent activity. As he saw it, there were no truly useful systems in the market that efficiently tapped the world of data available to companies — matching up and parsing both their internal information as well as external, publicly available data — to get more meaningful insights into potential fraud, money laundering and other illegal activities quickly and accurately.

Quantexa’s machine learning system approaches that challenge as a classic big data problem — too much data for humans to parse on their own, but easy work for AI algorithms processing huge amounts of it for specific ends.

Its so-called “Contextual Decision Intelligence” models (the name Quantexa is meant to evoke “quantum” and “context”) were built initially specifically to address this for financial services, with AI tools for assessing risk and compliance and identifying financial criminal activity, leveraging relationships that Quantexa has with partners like Accenture, Deloitte, Microsoft and Google to help fill in more data gaps.

The company says its software — and this, not the data, is what is sold to companies to use over their own datasets — has handled up to 60 billion records in a single engagement. It then presents insights in the form of easily digestible graphs and other formats so that users can better understand the relationships between different entities and so on.
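The core idea of linking records into a “single view” of an entity can be sketched in a few lines. This toy example is not Quantexa’s proprietary system; it merely illustrates the principle with hypothetical records: normalize names, index records by shared keys (a name or an account identifier), and merge overlapping groups with union-find so that transitively connected records form one entity.

```python
from collections import defaultdict

def normalize(name):
    return "".join(c for c in name.lower() if c.isalnum())

def link_records(records):
    """Group records sharing a normalized name or an exact identifier
    into candidate entities via union-find."""
    parent = list(range(len(records)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)

    keys = defaultdict(list)
    for i, rec in enumerate(records):
        keys[("name", normalize(rec["name"]))].append(i)
        for ident in rec.get("ids", []):
            keys[("id", ident)].append(i)
    for members in keys.values():
        for i in members[1:]:
            union(members[0], i)

    entities = defaultdict(list)
    for i in range(len(records)):
        entities[find(i)].append(i)
    return sorted(sorted(v) for v in entities.values())

records = [
    {"name": "J. Smith Ltd", "ids": ["ACC-1"]},
    {"name": "j smith ltd",  "ids": ["ACC-2"]},   # same name, second account
    {"name": "Smith Holdings", "ids": ["ACC-2"]}, # shares that account
    {"name": "Acme GmbH", "ids": ["ACC-9"]},
]
print(link_records(records))  # [[0, 1, 2], [3]]
```

At 60 billion records the interesting engineering is in fuzzy matching, scoring and distributed execution, but the output is the same shape: a graph of entities and the relationships between them.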

Today, financial services companies still make up about 60% of the company’s business, Marria said, with 7 of the top 10 UK and Australian banks and 6 of the top 14 financial institutions in North America among its customers. (The list includes its strategic backer HSBC, as well as Standard Chartered Bank and Danske Bank.)

But alongside those — spurred by a huge shift in the market to relying significantly more on wider data sets, to businesses updating their systems in recent years, and the fact that, in the last year, online activity has in many cases become the “only” activity — Quantexa has expanded more significantly into other sectors.

“The financial crisis [of 2007] was a tipping point in terms of how financial services companies became more proactive, and I’d say that the pandemic has been a turning point around other sectors like healthcare in how to become more proactive,” Marria said. “To do that you need more data and insights.”

So in the last year in particular, Quantexa has expanded to include other verticals facing financial crime, such as healthcare, insurance, government (for example in tax compliance), and telecoms/communications, but in addition to that, it has continued to diversify what it does to cover more use cases, such as building more complete customer profiles that can be used for KYC (know your customer) compliance or to serve them with more tailored products. Working with government, it’s also seeing its software getting applied to other areas of illicit activity, such as tracking and identifying human trafficking.

In all, Quantexa has “thousands” of customers in 70 markets. Quantexa cites figures from IDC that estimate the market for such services — both financial crime and more general KYC services — is worth about $114 billion annually, so there is still a lot more to play for.

“Quantexa’s proprietary technology enables clients to create single views of individuals and entities, visualized through graph network analytics and scaled with the most advanced AI technology,” said Adarsh Sarma, MD and co-head of Europe at Warburg Pincus, in a statement. “This capability has already revolutionized the way KYC, AML and fraud processes are run by some of the world’s largest financial institutions and governments, addressing a significant gap in an increasingly important part of the industry. The company’s impressive growth to date is a reflection of its invaluable value proposition in a massive total available market, as well as its continued expansion across new sectors and geographies.”

Interestingly, Marria admitted to me that the company has been approached by big tech companies and others that work with them as an acquisition target — no real surprises there — but longer term, he would like Quantexa to keep growing on its own, with an independent future very much in his sights.

“Sure, an acquisition to the likes of a big tech company absolutely could happen, but I am gearing this up for an IPO,” he said.

Didi gets hit by Chinese government, and Pleo raises $150M

Hello and welcome back to Equity, TechCrunch’s venture-capital-focused podcast where we unpack the numbers behind the headlines.

This is Equity Monday Tuesday, our weekly kickoff that tracks the latest private market news, talks about the coming week, digs into some recent funding rounds and mulls over a larger theme or narrative from the private markets. You can follow the show and me on Twitter.

What a busy weekend we missed while mostly hearing distant explosions and hugging our dogs close. Here’s a sampling of what we tried to recap on the show:

It’s going to be a busy week! Chat tomorrow.

Equity drops every Monday at 7:00 a.m. PST, and Wednesday and Friday at 6:00 a.m. PST, so subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts!

Uber’s first head of data science just raised a new venture fund to back nascent AI startups

Kevin Novak joined Uber as its 21st employee and seventh engineer in 2011, and by 2014, he was the company’s head of data science. He talks proudly of that time, but like all good things, it ran its course and by the end of 2017, having accomplished what he wanted at the company, he left.

At first, he picked up the pace of his angel investing, work he’d already begun focusing on during weekends and evenings, ultimately building a portfolio of more than 50 startups (including the fintech Pipe and the autonomous checkout company Standard Cognition).

He also began advising both startups and venture firms — including Playground Global, Costanoa Ventures, Renegade Partners and Data Collective — and after falling in love with the work, Novak this year decided to launch his own venture outfit in Menlo Park, California, called Rackhouse Venture Capital. Indeed, Rackhouse just closed its debut fund with $15 million, anchored by Uber’s first head of engineering, Curtis Chambers; Steve Gilula, a former chairman of Searchlight Pictures; and the fund of funds Cendana Capital. A lot of the VCs Novak knows are also investors in the fund.

We caught up with Novak late last week to chat about that new vehicle. We also talked about his tenure at Uber, where, be warned, he played a major role in creating surge pricing (though he prefers the term “dynamic pricing”). You can hear that fuller discussion or check out excerpts from it, edited lightly for length and clarity, below.

TC: You were planning to become a nuclear physicist. How did you wind up at Uber?

KN: As an undergrad, I was studying physics, math and computer science, and when I got to grad school, I really wanted to teach. But I also really liked programming and applying physics concepts in the programming space, and the nuke department had the largest allocation of supercomputer time, so that ended up driving a lot of my research — just the opportunity to play on computers while doing physics. So [my] studying to become a nuclear physicist was funded very indirectly through the research that eventually became the Higgs boson. As the Higgs got discovered, it was very good for humanity and absolutely horrible for my research budget . . .

A friend of mine heard what I was doing and sort of knew my skill set and said, like, ‘Hey, you should come check out this Uber cab company; it’s like a limo company with an app. There’s a very interesting data problem and a very interesting math problem.’ So I ended up applying, [though I committed] the cardinal sin of startup applications and wore a suit and tie to my interview.

TC: You’re from Michigan. I also grew up in the Midwest so appreciate why you might think that people would wear a suit to an interview.

KN: I got off the elevator and the friend who’d encouraged me to apply was like, ‘What are you wearing?!’ But I got asked to join nonetheless as a computational algorithms engineer — a title that predated the data science trend — and I spent the next couple of years living in the engineering and product world, building data features and . . . things like our ETA engine, basically predicting how long it would take an Uber to get to you. One of my very first projects was working on tolls and tunnels, because figuring out which tunnel an Uber went through and how to bill time and distance was a common failure point. So I spent, like, three days driving the Big Dig in Boston out to Somerville and back to Logan with a bunch of phones, collecting GPS data.

I got to know a lot of very random facts about Uber cities, but my big claim to fame was dynamic pricing. . . and it turned out to be a really successful cornerstone for the strategy of making sure Ubers were available.
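The intuition behind dynamic pricing is easy to sketch, though Uber’s production model is far more sophisticated (it forecasts demand, supply elasticity and rider behavior). The toy function below, with entirely hypothetical parameters, just captures the core idea: scale price with the ratio of open requests to idle drivers, with a floor of 1.0x and a cap.

```python
def surge_multiplier(open_requests, idle_drivers, base=1.0, cap=3.0):
    """Toy dynamic pricing: scale fares with the demand/supply ratio,
    clamped between the base multiplier and a maximum cap."""
    if idle_drivers == 0:
        return cap  # no supply at all: price at the cap
    ratio = open_requests / idle_drivers
    return round(min(cap, max(base, base * ratio)), 2)

print(surge_multiplier(10, 10))  # 1.0  (balanced market, no surge)
print(surge_multiplier(18, 10))  # 1.8  (demand outstrips supply)
print(surge_multiplier(90, 10))  # 3.0  (capped)
```

The behavioral-economics payoff Novak alludes to is that the higher multiplier both rations demand and pulls more drivers onto the road, which is what made it “a cornerstone for the strategy of making sure Ubers were available.”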

TC: How does that go over, when you tell people that you invented surge pricing?

KN: It’s a very quick litmus test to figure out like people’s underlying enthusiasm for behavioral econ and finance. The Wall Street crowd is like, ‘Oh my god, that’s so cool.’ And then a lot of people are like, ‘Oh, thank you, yeah, thank you so much, wonderful, you buy the next round of drinks’ type of thing. . . [Laughs.]

But data also became the incubation space for a lot of the early special projects like UberPool, and a lot of the ideas around, okay, how would you build a dispatching model that enables different people with pooled ride requests? How do you batch them together efficiently in space and time so that we can get the right match rate [so the] project is profitable? We did a lot of work on the theory behind the hub-and-spoke Uber Eats delivery models and thinking through how we apply our learnings about ride-share to food. So I got the first-person perspective on a lot of these products when it was literally three people scribbling on a notepad or riffing on a laptop over lunch, [and which] eventually went on to become these big, nationwide businesses.

TC: You were working on Uber Freight for the last nine months of your career with Uber, so you were there when this business with Anthony Levandowski was blowing up.

KN: Yeah, it was a very interesting era for me because more than six years in, [I was already developing the] attitude of ‘I’ve done everything I wanted to do.’ I joined a 20-person company and, at the time, we were closing in on 20,000 people . . . and I kind of missed the small team dynamic and felt like I was hitting a natural stopping point. And then Uber’s 2017 happened, and there was Anthony, there was Susan Fowler, and Travis had this horrific accident in his personal life and his head was clearly not in the game. But I didn’t want to be the guy who was known for bailing in the worst quarter of the company’s history, so I ended up spending the next year basically keeping the band together and trying to figure out what I could do to keep whatever small part of the company I was running intact and motivated and empathetic and good in every sense of the word.

TC: You left at the end of that year and it seems you’ve been very busy since, including, now, launching this new fund with the backing of outsiders. Why call it Rackhouse? You used the brand Jigsaw Venture Capital when you were investing your own money.

KN: Yeah. A year [into angel investing], I had formed an LLC, I was “marking” my portfolio to market, sending quarterly updates to myself and my accountant and my wife. It was one of these exercises that was a carryover from how I was training managers, in that I think you grow most efficiently and successfully if you can develop a few skills at a time. So I was trying to figure out what it would take to run my own back office, even if it was just moving my money from my checking account to my “investing account,” and writing my own portfolio update.

I was really excited about the possibility of launching my first externally facing fund with other people’s money under the Jigsaw banner, too, but there’s actually a fund in the UK [named Jigsaw] and as I started to talk to LPs and was saying ‘Look, I want to do this data fund and I want it to be early stage,’ I’d get calls from them being like, ‘We just saw that Jigsaw did this Series D in Crowdstrike.’ I realized I’d be competing with the other Jigsaw from a mindshare perspective, so figured before things go too big and crazy, I’d create my own distinct brand.

TC: Did you roll any of your angel-backed deals into the new fund? I see Rackhouse has 13 portfolio companies.

KN: There are a few that I’ve agreed to move forward and warehouse for the fund, and we’re just going through the technicalities of doing that right now.

TC: And the focus is on machine learning and AI.

KN: That’s right, and I think there are amazing opportunities outside of the traditional areas of industry focus that, to the extent that you can find rigorous applications of AI, are also going to be significantly less competitive. [Deals] that don’t fall in the strike zone of nearly as many [venture] firms are the game I want to be playing. I feel like that opportunity — regardless of sector, regardless of geography — biases toward domain experts.

TC: I wonder if that also explains the size of your fund — your wanting to stay out of the strike zone of most venture firms.

KN: I want to make sure that I build a fund that enables me to be an active participant in the earliest stages of companies.

Matt Ocko and Zack Bogue [of Data Collective] are good friends of mine — they’re mentors, in fact, and small LPs in the fund — and they talked with me about how they got started. But now they have a billion-plus [dollars] in assets under management, and the people I [like to back] are two people who are moonlighting and getting ready to take the plunge, and [firms the size of Data Collective] have basically priced themselves out of the formation and pre-seed stage, and I like that stage. It’s something where I have a lot of useful experience. I also think it’s the stage where, if you come from a place of domain expertise, you don’t need five quarters of financials to get conviction.

Trigo bags $10M for computer-vision based checkout tech to rival Amazon’s ‘Just Walk Out’

While Amazon continues to expand its self-service, computer-vision-based grocery checkout technology by bringing it to bigger stores, an AI startup out of Israel that’s built something to rival it has picked up funding and a new strategic investor as a customer.

Trigo, which has produced a computer vision system that includes both camera hardware and encrypted, privacy-compliant software to enable “grab and go” shopping — where customers can pick up items that get automatically detected and billed before they leave the store — has bagged $10 million in funding from German supermarket chain REWE Group and Viola Growth.

The exact amount of the investment was not disclosed at first (perhaps because $10 million, in these crazy times, suddenly sounds like a modest amount?), but Trigo has since confirmed that it is raising $10 million today, bringing its total to $104 million — a figure that includes a Series A in 2019 and a $60 million Series B raised in December of last year. (PitchBook had previously put the company’s total at up to $87 million.)

The company is not disclosing its valuation. We have asked and will update as we learn more.

“Trigo is immensely proud and honored to be deepening its strategic partnership with REWE Group, one of Europe’s biggest and most innovative grocery retailers,” said Michael Gabay, Trigo co-founder and CEO, in a statement. “REWE have placed their trust in Trigo’s privacy-by-design architecture, and we look forward to bringing this exciting technology to German grocery shoppers. We are also looking forward to working with Viola Growth, an iconic investment firm backing some of Israel’s top startups.”

The REWE investment is part of a bigger partnership between the two companies, which will begin with a new “grab and go” REWE store in Cologne. REWE has 3,700 stores across Germany, so there is a lot of scope for expansion. REWE is Trigo’s second strategic investor: Tesco has also backed the startup and has been trialling its technology in the U.K. Trigo’s tech is also being used by Shufersal, a grocery chain in Israel.

REWE’s investment comes amid a spate of tech engagements by the grocery giant, which recently also announced a partnership with Flink, a new grocery delivery startup out of Germany that recently raised a big round of funding to expand. It’s also working with Yamo, a healthy eating startup, and Whisk, an AI-powered buy-to-cook startup.

“With today’s rapid technological developments, it is crucial to find the right partners,” said Christoph Eltze, Executive Board Member Digital, Customer & Analytics REWE Group. “REWE Group is investing in its strategic partnership with Trigo, who we believe is one of the leading companies in computer vision technologies for smart stores.”

More generally, consumer habits are changing, fast. Whether we are talking about the average family, or the average individual, people are simply not shopping, cooking and eating in the same way that they were even 10 years ago, let alone 20 or 30 years ago.

And so like many others in the very established brick-and-mortar grocery business, REWE — founded in 1927 — is hoping to tie up with some of the more interesting innovators to better keep ahead in the game.

“I don’t actually think people really want grocery e-commerce,” Ran Peled, Trigo’s VP of marketing, told me back in 2019. “They do that because the supermarket experience has become worse with the years. We are very much committed to helping brick and mortar stores return to the time of a few decades ago, when it was fun to go to the supermarket. What would happen if a store could have an entirely new OS that is based on computer vision?”

It will be interesting to see how widely used and “fun” smart checkout services will become in that context, and whether it will be a winner-takes-all market, or whether we’ll see a proliferation of others emerge to provide similar tools.

In addition to Amazon and Trigo, there is also Standard Cognition, which earlier this year raised money at a $1 billion valuation, among others taking a variety of approaches. More competition could also mean more competitive pricing for systems that might otherwise prove costly to implement and run anywhere but the busiest locations.

There is also a bigger question over what the optimal size will be for cashierless, grab-and-go technology. Trigo cites data from Juniper Research that forecasts $400 billion in smart checkout transactions annually by 2025, but it seems that the focus in that market will likely be, in Juniper’s view, on smaller grocery and convenience stores rather than the cavernous cathedrals to consumerism that many of these chains operate. In that category, the market size is 500,000 stores globally, 120,000 of them in Europe.

Enterprise AI platform Dataiku launches managed service for smaller companies

Dataiku is going downstream with a new product today called Dataiku Online. As the name suggests, Dataiku Online is a fully managed version of Dataiku. It lets you take advantage of the data science platform without going through a complicated setup process that involves a system administrator and your own infrastructure.

If you’re not familiar with Dataiku, the platform lets you turn raw data into advanced analytics, run some data visualization tasks, create data-backed dashboards and train machine learning models. In particular, Dataiku can be used by data scientists, but also business analysts and less technical people.

The company has been mostly focused on big enterprise clients. Right now, Dataiku has more than 400 customers, such as Unilever, Schlumberger, GE, BNP Paribas, Cisco, Merck and NXP Semiconductors.

There are two ways to use Dataiku. You can install the software on your own on-premises servers, or you can run it on a cloud instance. With Dataiku Online, the startup offers a third option and takes care of setup and infrastructure for you.

“Customers using Dataiku Online get all the same features that our on-premises and cloud instances provide, so everything from data preparation and visualization to advanced data analytics and machine learning capabilities,” co-founder and CEO Florian Douetteau said. “We’re really focused on getting startups and SMBs on the platform — there’s a perception that small or early-stage companies don’t have the resources or technical expertise to get value from AI projects, but that’s simply not true. Even small teams that lack data scientists or specialty ML engineers can use our platform to do a lot of the technical heavy lifting, so they can focus on actually operationalizing AI in their business.”

Customers using Dataiku Online can take advantage of Dataiku’s pre-built connectors. For instance, you can connect your Dataiku instance with a cloud data warehouse, such as Snowflake Data Cloud, Amazon Redshift and Google BigQuery. You can also connect to a SQL database (MySQL, PostgreSQL…), or you can just run it on CSV files stored on Amazon S3.

And if you’re just getting started and you have to work on data ingestion, Dataiku works well with popular data ingestion services. “A typical stack for our Dataiku Online Customers involves leveraging data ingestion tools like FiveTran, Stitch or Alooma, that sync to a cloud data warehouse like Google BigQuery, Amazon Redshift or Snowflake. Dataiku fits nicely within their modern data stacks,” Douetteau said.

Dataiku Online is a nice offering to get started with Dataiku. High-growth startups might start with Dataiku Online as they tend to be short on staff and want to be up and running as quickly as possible. But as you become bigger, you could imagine switching to a cloud or on-premise installation of Dataiku. Employees can keep using the same platform as the company scales.

Emotion-detection software startup Affectiva acquired for $73.5M

Smart Eye, the publicly traded Swedish company that supplies driver monitoring systems for a dozen automakers, has acquired emotion-detection software startup Affectiva for $73.5 million in a cash-and-stock deal.

Affectiva, which spun out of the MIT Media Lab in 2009, has developed software that can detect and understand human emotion, which Smart Eye is keen to combine with its own AI-based eye-tracking technology. The companies’ founders see an opportunity to expand beyond driver monitoring systems — tech that is often used in conjunction with advanced driver assistance systems to track and measure awareness — and into the rest of the vehicle. Together, the technology could help them break into the emerging “interior sensing” market, which can be used to monitor the entire cabin of a vehicle and deliver services in response to the occupant’s emotional state.

Under the terms of the deal, $67.5 million will be paid with 2,354,668 new Smart Eye shares, of which 2,015,626 are to be issued upon closing of the transaction. The remaining 339,042 Smart Eye shares will be issued within two years of closing. About $6 million will be paid in cash once the deal closes in June 2021.
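The deal terms above can be sanity-checked with quick arithmetic — the two share tranches sum to the stated share count, and the stock and cash components sum to the headline price:

```python
# Figures from the announcement: $67.5M paid in new Smart Eye shares
# (two tranches), plus about $6M in cash, for a $73.5M headline price.
upfront_shares = 2_015_626   # issued at closing
deferred_shares = 339_042    # issued within two years of closing
total_shares = upfront_shares + deferred_shares

stock_value = 67_500_000
cash = 6_000_000

print(total_shares)                          # 2354668, the stated total
print(stock_value + cash)                    # 73500000, the headline price
print(round(stock_value / total_shares, 2))  # ~28.67 implied value per new share
```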

Affectiva and Smart Eye were competitors. A meeting at the technology trade show CES in 2020 put the two companies on a path to merge.

“Martin and I realized like, wow, we are on a path to compete with each other — and wouldn’t it be so much better if we joined forces?” Affectiva co-founder and CEO Dr. Rana el Kaliouby said in an interview Tuesday. “By joining forces, we kind of check all the boxes for what the OEMs are looking for with interior sensing, we leapfrog the competition and we have an opportunity to do this better and faster than we could have done it on our own.”

Boston-based Affectiva brings its emotion-detection software to the deal, which will allow Smart Eye to offer its existing automotive partners a variety of products. Smart Eye helps Affectiva move beyond the development and prototype work and into production contracts. Smart Eye has won 84 production contracts with 13 OEMs, including BMW and GM. Smart Eye, which has offices in Gothenburg, Detroit, Tokyo and Chongqing, China, also has a division that provides research organizations such as NASA with high-fidelity eye tracking systems for human factors research.

Smart Eye founder and CEO Martin Krantz said that European manufacturers building luxury and premium vehicles led the charge for driver monitoring systems.

“We see the same pattern repeating itself now for interior sensing,” Krantz said. “I think a large part of the early contracts will be European premium OEMs such as Mercedes, BMW, Audi, JLR, Porsche.” Krantz added that there are a number of other premium brands it will target in other regions, including Cadillac and Lexus.

The opportunity will initially be in passenger vehicles driven by humans and will eventually expand as greater levels of automated driving enter the market.

Affectiva, which employs 100 people at its offices in Boston and Cairo, also has another business unit that applies its emotion-detection software to media analytics. This division, which will be part of the deal and will operate separately, is profitable, el Kaliouby said, noting the software is used by 70% of the world’s largest advertisers to measure and understand emotional responses to media content.

Mental health app Wysa raises $5.5M for ’emotionally intelligent’ AI

It’s hard enough to talk about your feelings to a person; Jo Aggarwal, the founder and CEO of Wysa, is hoping you’ll find it easier to confide in a robot. Or, put more specifically, “emotionally intelligent” artificial intelligence.

Wysa is an AI-powered mental health app designed by Touchkin eServices, Aggarwal’s company that currently maintains headquarters in Bangalore, Boston and London. Wysa is something like a chatbot that can respond with words of affirmation, or guide a user through one of 150 different therapeutic techniques.

Wysa is Aggarwal’s second venture. The first was an elder care company that failed to find market fit, she says. Aggarwal found herself falling into a deep depression, from which, she says, the idea of Wysa was born in 2016. 

In March, Wysa became one of 17 apps in the Google Assistant Investment Program, and in May, closed a Series A funding round of $5.5 million led by Boston’s W Health Ventures, the Google Assistant Investment Program, pi Ventures and Kae Capital. 

Wysa has raised a total of $9 million in funding, says Aggarwal, and the company has 60 full-time employees and about three million users. 

The ultimate goal, she says, is not to diagnose mental health conditions. Wysa is largely aimed at people who just want to vent. Most Wysa users are there to improve their sleep, anxiety or relationships, she says. 

“Out of the 3 million people that use Wysa, we find that only about 10% actually need a medical diagnosis,” says Aggarwal. If a user’s conversations with Wysa correspond to high scores on traditional screening questionnaires like the PHQ-9 for depression or the GAD-7 for generalized anxiety disorder, Wysa will suggest talking to a human therapist.
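A screening gate like the one Aggarwal describes reduces to a simple threshold check. The sketch below is illustrative only: the cutoff of 10 on each scale is the commonly cited “moderate” threshold for these questionnaires, not a value Wysa has disclosed.

```python
def suggest_referral(phq9_score: int, gad7_score: int) -> bool:
    """Return True if screening scores suggest talking to a human therapist.

    PHQ-9 ranges 0-27 and GAD-7 ranges 0-21; the cutoff of 10 on either
    scale (the widely used "moderate" threshold) is an assumption for
    illustration, not Wysa's actual rule.
    """
    PHQ9_CUTOFF = 10
    GAD7_CUTOFF = 10
    return phq9_score >= PHQ9_CUTOFF or gad7_score >= GAD7_CUTOFF

# A user scoring low on both scales keeps self-guided support;
# an elevated depression score triggers the human-therapist suggestion.
print(suggest_referral(4, 3))   # False
print(suggest_referral(15, 3))  # True
```

In practice the scores would come from the conversation itself rather than a form, but the referral decision stays this simple.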

Naturally, you don’t need to have a clinical mental health diagnosis to benefit from therapy. 

Wysa isn’t intended to be a replacement, says Aggarwal  (whether users view it as a replacement remains to be seen) but an additional tool that a user can interact with on a daily basis. 

“60 percent of the people who come and talk to Wysa need to feel heard and validated, but if they’re given techniques of self help, they can actually work on it themselves and feel better,” Aggarwal continues. 

Wysa’s approach has been refined through conversations with users and through input from therapists, says Aggarwal. 

For instance, while having a conversation with a user, Wysa will first categorize their statements and then assign a type of therapy, like cognitive behavioral therapy or acceptance and commitment therapy, based on those responses. It will then select a line of questioning or therapeutic technique written ahead of time by a therapist and begin to converse with the user.
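That categorize-then-route flow might look something like the sketch below. The categories, keyword matching, therapy mapping and scripted prompts are all hypothetical stand-ins for illustration, not Wysa’s actual classifier or content.

```python
# Hypothetical sketch of a categorize -> therapy -> technique pipeline.
# Keyword matching stands in for the real statement classifier.
CATEGORY_KEYWORDS = {
    "anxiety": ["worried", "anxious", "panic"],
    "low_mood": ["sad", "hopeless", "empty"],
    "conflict": ["angry", "argument", "fight"],
}

# Each category maps to a therapy style and a therapist-written opener.
THERAPY_PLAN = {
    "anxiety": ("CBT", "What thought is going through your mind right now?"),
    "low_mood": ("behavioral activation", "What is one small thing you could do today?"),
    "conflict": ("acceptance and commitment therapy", "Can you walk me through what happened?"),
}

def respond(statement):
    """Categorize a user statement and pick a therapy plus opening prompt."""
    text = statement.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(word in text for word in keywords):
            return THERAPY_PLAN[category]
    # Nothing matched: fall back to open-ended listening.
    return ("supportive listening", "Tell me more about that.")

therapy, prompt = respond("I feel so anxious about work")
print(therapy)  # CBT
```

The real system would replace the keyword table with a trained model, but the routing structure, classify first, then select pre-written therapeutic content, is the same.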

Wysa, says Aggarwal, has been gleaning its own insights from over 100 million conversations that have unfolded this way. 

“Take for instance a situation where you’re angry at somebody else. Originally our therapists would come up with a technique called the empty chair technique where you’re trying to look at it from the other person’s perspective. We found that when a person felt powerless or there were trust issues, like teens and parents, the techniques the therapists were giving weren’t actually working,” she says. 

“There are 10,000 people facing trust issues who are actually refusing to do the empty chair exercise. So we have to find another way of helping them. These insights have built Wysa.”

Although Wysa has been refined in the field, research institutions have played a role in Wysa’s ongoing development. Pediatricians at the University of Cincinnati helped develop a module specifically targeted towards COVID-19 anxiety. There are also ongoing studies of Wysa’s ability to help people cope with mental health consequences from chronic pain, arthritis, and diabetes at The Washington University in St. Louis, and The University of New Brunswick. 

Still, Wysa has had several tests in the real world. In 2020, the government of Singapore licensed Wysa, and provided the service for free to help cope with the emotional fallout of the coronavirus pandemic. Wysa is also offered through the health insurance company Aetna as a supplement to Aetna’s Employee Assistance Program. 

The biggest concern about mental health apps, naturally, is that they might accidentally trigger an incident or miss signs of self-harm. To address this, the UK’s National Health Service (NHS) offers specific compliance standards. Wysa is compliant with the NHS’ DCB0129 standard for clinical safety, the first AI-based mental health app to earn the distinction.

To meet those guidelines, Wysa appointed a clinical safety officer, and was required to create “escalation paths” for people who show signs of self harm.

Wysa, says Aggarwal, is also designed to flag responses to self-harm, abuse, suicidal thoughts or trauma. If a user’s responses fall into those categories Wysa will prompt the user to call a crisis line.

In the US, the Wysa app that anyone can download, says Aggarwal, fits the FDA’s definition of a general wellness app or a “low risk device.” That’s relevant because, during the pandemic, the FDA has created guidance to accelerate distribution of these apps. 

Still, Wysa may not perfectly categorize each person’s response. A 2018 BBC investigation, for instance, noted that the app didn’t appear to appreciate the severity of a proposed underage sexual encounter. Wysa responded by updating the app to handle more instances of coercive sex. 

Aggarwal also notes that Wysa contains a manual list of sentences, often containing slang, that they know the AI won’t catch or accurately categorize as harmful on its own. Those are manually updated to ensure that Wysa responds appropriately. “Our rule is that [the response] can be 80% appropriate, but 0% triggering,” she says.

In the immediate future, Aggarwal says the goal is to become a full-stack service. Rather than having to refer patients who do receive a diagnosis to Employee Assistance Programs (as the Aetna partnership might) or outside therapists, Wysa aims to build out its own network of mental health suppliers.

On the tech side, they’re planning an expansion into Spanish, and will start investigating a voice-based system based on guidance from the Google Assistant Investment Program.

 

This startup says its AI can better spot a healthy embryo — and improve IVF success

Every year, AI brings more standardized levels of diagnostic accuracy to medicine. This is true of skin cancer detection, for example, and of lung cancers.

Now, a startup in Israel called Embryonics says its AI can improve the odds of successfully implanting a healthy embryo during in vitro fertilization. What the company has been developing, in essence, is an algorithm to predict embryo implantation probability, one it has trained on time-lapse imaging of embryos developing during IVF.

It’s just getting started, to be clear. So far, in a pilot involving 11 women ranging in age from 20 to 40, six of those individuals are enjoying successful pregnancies, and the other five are awaiting results, says Embryonics.

Still, Embryonics is interesting for its potential to shake up a big market that’s been stuck for decades and continues to grow only because of external trends, like millennial women who are putting off having children owing to economic concerns.

Consider that the global in-vitro fertilization market is expected to grow from roughly $18.3 billion to nearly double that number in the next five years by some estimates. Yet the tens of thousands of women who undergo IVF each year have long faced costs of anywhere from $10,000 to $15,000 per cycle (at least in the U.S.), along with long-shot odds that grow worse with age.

Indeed, it’s the prospect of reducing the number of IVF rounds and their attendant expenses that drives Embryonics, which was founded three years ago by CEO Yael Gold-Zamir, an M.D. who studied general surgery at Hebrew University, yet became a researcher in an IVF laboratory owing to an abiding interest in the science behind fertility.

As it happens, she would be introduced to two individuals with complementary interests and expertise. One of them was David Silver, who had studied bioinformatics at the prestigious Technion-Israel Institute of Technology and who, before joining Embryonics last year, spent three years as a machine learning engineer at Apple and three years before that as an algorithm engineer at Intel.

The second individual to whom Gold-Zamir was introduced was Alex Bronstein, a serial founder who spent years as a principal engineer with Intel and who is today the head of the Center for Intelligent Systems at Technion as well as involved with several efforts involving deep learning AI, including at Embryonics and at Sibylla AI, a nascent outfit focused on algorithmic trading in capital markets.

It’s a small outfit, but the three, along with the 13 other full-time employees who have joined them, appear to be making progress.

Fueled in part by $4 million in seed funding led by the Shuctermann Family Investment Office (led by the former president of Soros Capital, Sender Cohen) and the Israeli Innovation Authority, Embryonics says it’s about to receive regulatory approval in Europe that will enable it to sell its software — which the team says can recognize patterns and interpret images in small cell clusters with greater accuracy than a human — to fertility clinics across the continent.

Using a database of millions of (anonymized) patient records from centers around the world, representing all races, geographies and ages, says Gold-Zamir, the company is already eyeing next steps, too.

Most notably, beyond analyzing which of several embryos is most likely to thrive, Embryonics wants to work with fertility clinics on improving what’s called hormonal stimulation, so that their patients produce as many mature eggs as possible.

As Bronstein explains it, every woman who goes through IVF or fertility preservation undergoes a hormonal stimulation process — hormone injections over 8 to 14 days — to induce her ovaries to produce numerous eggs. But right now, there are just three general protocols and a “lot of trial and error in trying to establish the right one,” he says.

Through deep learning, Embryonics thinks it can begin to understand not just which hormones each individual should be taking but the different times they should be taken.

In addition to embryo selection, Embryonics has developed a non-invasive genetic test based on analysis of visual information, together with clinical data, that in some cases can detect major chromosomal aberrations like Down syndrome, says Gold-Zamir.

And there’s more in the works if all goes as planned. “Embryonics’s goal is to provide a holistic solution, covering all aspects of the process,” says Gold-Zamir, who volunteers that she is raising four children of her own, along with running the company.

It’s too soon to say whether the nascent outfit will succeed, naturally. But it certainly seems to be at the forefront of change in a field where, for more than 40 years, many IVF clinics worldwide have simply assessed embryo health by looking at days-old embryos in a petri dish under a microscope to judge their cell multiplication and shape.

In the spring of 2019, for instance, investigators from Weill Cornell Medicine in New York City published their own conclusion that AI can evaluate embryo morphology more accurately than the human eye, after using 12,000 photos of human embryos taken precisely 110 hours after fertilization to train an algorithm to discriminate between poor and good embryo quality.

The investigators said that each embryo was first assigned a grade by embryologists that considered various aspects of the embryo’s appearance. The investigators then performed a statistical analysis to correlate the embryo grade with the probability of a successful pregnancy outcome. Embryos were considered good quality if the chances were greater than 58% and poor quality if the chances were below 35%.

After training and validation, the algorithm was able to classify the quality of a new set of images with 97% accuracy.
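The grading rule the investigators describe amounts to a band check on the predicted pregnancy probability. In this minimal sketch, the 58% and 35% cutoffs come from the study as reported above, while the “intermediate” label for the middle band is a placeholder of mine; the study only defines the good and poor bands.

```python
def embryo_quality(pregnancy_probability: float) -> str:
    """Map a predicted successful-pregnancy probability to a quality label.

    Cutoffs (>58% good, <35% poor) are those reported for the Weill
    Cornell study; the 'intermediate' label for the middle band is a
    placeholder, since the study defines only the two outer bands.
    """
    if pregnancy_probability > 0.58:
        return "good"
    if pregnancy_probability < 0.35:
        return "poor"
    return "intermediate"

print(embryo_quality(0.72))  # good
print(embryo_quality(0.20))  # poor
```

The trained classifier then only has to reproduce these labels from images, which is what the reported 97% accuracy measures.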

Photo Credit: Tammy Bar-Shay

DNA Testing for a Healthier Diet

Have you ever felt frustrated by diets? Have you ever struggled with weight loss? Have you been discouraged by eating “healthy” food that made you feel bloated or have low energy?

Most people have run into issues with their diet at some point. While it’s true that just about nobody does well on a diet full of sugar, fried and fatty snacks, and processed foods, it’s also true that everyone’s body is totally unique. The diet that helps your friend or your sister feel their best might not work for you at all.

We’re learning more and more about how the body works all the time. DNA testing has given us a window into our unique genetic code, empowering us to learn more about our similarities and differences.

Now, we have the ability to use DNA testing to discover which food choices work best for individual people. Here’s what you need to know about this exciting new breakthrough and why turning to your DNA could be the answer to a healthier life.

There Is No Such Thing As “One Diet Fits All”


We’re all humans, and we all have similar nutritional needs. But that doesn’t mean we should all be eating the same things. Research is showing that there’s no such thing as a diet that works for absolutely everyone.

We all have different dietary needs. Even identical twins respond differently to fats, carbohydrates, proteins, and other components in food due to differences in habits, stress, sleep, exercise, and gut microbes. Basically, depending on how you live your life, you will have different nutritional needs!

With that said, genetics do play a role in how we respond to food. Nutrigenomics is a term for using genetic testing to help individuals tailor a healthy diet that works for their body. DNA testing can help to give you a framework for a diet that you can tinker with as needed.

The Positives of Aligning Your Diet to Your DNA

Genetic testing is used in many applications, from prenatal testing to genealogy to crop development. The more we know about the genes of an organism, the more we understand the way it develops and behaves. DNA testing doesn’t take all the guesswork out of creating a personalized diet, but it can help you understand more about your body and what it needs.

There are several benefits to aligning your diet to your DNA. First, you might be able to learn more about what types of food help or harm your body. You might be able to cut out a lot of trial and error in the process of designing a healthy diet that will keep you operating at your peak. And finally, you could get a head start on optimizing your current and future health and weight.

Can Artificial Intelligence Help You Make Smarter Food Choices?

Without advanced technology and data processing, nutrigenomics wouldn’t be possible. Big data and artificial intelligence (AI) can make connections and spot patterns that are difficult, time-consuming, or even impossible for human researchers.

Artificial intelligence already has a range of applications in healthcare. But could it also be used to help people make smarter food choices? We already use AI to aid in diagnostics by allowing these systems to interpret and analyze patient data to find connections and risk factors.

It’s not much of a stretch to think that one day, we could input a person’s health data into the system and have the AI analyze it and provide dietary recommendations. Since we know that lots of different lifestyle factors have an impact on these recommendations, AI could create customized plans much more quickly.

Benefits of a More Personalized Diet


Maintaining a healthy diet is challenging for most people. It becomes even harder if the latest fad diet simply doesn’t agree with your body chemistry. That’s where personalized diets can make a huge difference—they take into account lots of different factors and set people up for success. They also make sticking to a healthy diet easier and more enjoyable!

Getting your DNA tested won’t automatically make you healthier. You still have to do the work and learn how to stick with healthy choices. With that said, nutrigenomics can be a useful tool in your journey to better health. It can give you the information you need to make positive changes in your life. Just remember: not one single approach works for everyone!

The post DNA Testing for a Healthier Diet appeared first on Dumb Little Man.

Russian surveillance tech startup NtechLab nets $13M from sovereign wealth funds

NtechLab, a startup that helps analyze footage captured by Moscow’s 100,000 surveillance cameras, just closed an investment of more than 1 billion rubles ($13 million) to further its global expansion.

The five-year-old company sells software that recognizes faces, silhouettes and actions on videos. It’s able to do so on a vast scale in real time, allowing clients to react promptly to situations. That’s a key “differentiator” of the company, co-founder Artem Kukharenko told TechCrunch.

“There could be systems which can process, for example, 100 cameras. When there are a lot of cameras in a city, [these systems] connect 100 cameras from one part of the city, then disconnect them and connect another hundred cameras in another part of the city, so it’s not so interesting,” he suggested.

The latest round, financed by Russia’s sovereign wealth fund, the Russian Direct Investment Fund, and an undisclosed sovereign wealth fund from the Middle East, certainly carries more strategic than financial importance. The company broke even last year with revenue reaching $8 million, three times the number from the previous year, and expects to finish 2020 at a similar growth pace.

Nonetheless, the new round will enable the startup to develop new capabilities such as automatic detection of aggressive behavior and vehicle recognition as it seeks new customers in its key markets of the Middle East, Southeast Asia and Latin America. City contracts have been a major revenue driver for the firm, but it has plans to woo non-government clients, such as those in the entertainment industry, finance, trade and hospitality.

The company currently boasts clients in 30 cities across 15 countries in the Commonwealth of Independent States (CIS) bloc, Middle East, Latin America, Southeast Asia and Europe.

These customers may procure from a variety of hardware vendors featuring different graphic processing units (GPUs) to carry out computer vision tasks. As such, NtechLab needs to ensure it’s constantly in tune with different GPU suppliers. Ten years ago, Nvidia was the go-to solution, recalled Kukharenko, but rivals such as Intel and Huawei have cropped up in recent times.

The Moscow-based startup began life as consumer software that allowed users to find someone’s online profile by uploading a photo of the person. It later pivoted to video and has since attracted government clients keen to deploy facial recognition in law enforcement. For instance, during the COVID-19 pandemic, the Russian government has used NtechLab’s system to monitor large gatherings and implement access control.

Around the world, authorities have rushed to implement similar forms of public health monitoring and tracking for virus control. While these projects are usually well-meaning, they inspire a much-needed debate around privacy, discrimination, and other consequences brought by the scramble for large-scale data solutions. NtechLab’s view is that when used properly, video surveillance generally does more good than harm.

“If you can monitor people quite [effectively], you don’t need to close all people in the city… The problem is people who don’t respect the laws. When you can monitor these people and [impose] a penalty on them, you can control the situation better,” argued Alexander Kabakov, the other co-founder of the company.

As it expands globally, NtechLab inevitably comes across customers who misuse or abuse its algorithms. While it claimed to keep all customer data private and have no control over how its software is used, the company strives to “create a process that can be in compliance with local laws,” said Kukharenko.

“We vet our partners so we can trust them, and we know that they will not use our technology for bad purposes.”

The Industry of Mobile Gaming Explained

In its current state, the mobile gaming industry is worth billions. In fact, even games that are free to download are raking in millions by the month: Clash of Clans, Candy Crush Saga, and Fortnite, to name a few.

How so?

People can’t get enough of their games. Engagement with mobile gaming continues to rise, growing by about 10% a year. By 2021, about 1 in 4 people will be considered active gamers, and by then, consumers will have spent $90 billion on mobile games.

The use of artificial intelligence (AI) and augmented reality is a major driver in captivating users to increase the rate of gameplay. Unbeknownst to most, this isn’t a new element in gaming.

In-game AI goes back to the days of Pac-Man. Using pathfinding algorithms to plot the shortest path between two points, Pac-Man and Ms. Pac-Man characters could quickly navigate mazes. Today, developers use smart technology to enhance the personalization gaming provides. This was seen in Pokemon Go, Red Dead Redemption 2, and more. Let’s dig in.
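Maze pathfinding of the kind described above can be done with a breadth-first search, which finds a shortest route on an unweighted grid. This is a minimal sketch of the technique, not the original arcade code:

```python
from collections import deque

def shortest_path_length(grid, start, goal):
    """Breadth-first search over a grid maze; '#' cells are walls.

    Returns the number of steps on a shortest path from start to goal,
    or -1 if the goal is unreachable. Cells are (row, col) tuples.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        # Explore the four orthogonal neighbors.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

maze = [
    "..#.",
    ".##.",
    "....",
]
print(shortest_path_length(maze, (0, 0), (0, 3)))  # 7
```

Because BFS expands cells in order of distance, the first time it reaches the goal is guaranteed to be along a shortest path.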

See Also: How Augmented Reality Is Changing The Game

Gaming has fueled advances in data science since the beginning. And although gaming and artificial intelligence (AI) have always gone hand in hand, augmented reality (AR) was first brought into the mainstream by mobile games. The wow factor of AR makes for a more immersive gameplay experience.


The most popular AR games have earned billions. For example, Pokemon Go has reeled in over $2 billion to date. Jurassic World has a cumulative revenue of over $20.5 million. The Walking Dead: Our World has drawn in over $6.8 million.

All in all, 81% of today’s American gamers have played using AR technology at least once, and 53% play mobile AR games, such as those previously mentioned, routinely.

Reverting to the conversation of artificial intelligence as a driver for the business of mobile gaming, in-game AI features continue to improve. Let’s take Red Dead Redemption 2 for example. Its developers were able to define complex behaviors for Non-Player Characters (NPCs) by using its corresponding AI software.

As the industries of gaming and data science continue to mesh, the user gameplay experience will be completely reinvented. OpenAI’s Universe Program is a great example of this, letting companies that develop self-driving cars train their AI algorithms by playing Grand Theft Auto.

Google’s DeepMind provides a similar advantage, capable of beating 99.8% of human players in StarCraft II. More commonly known, Microsoft’s Project Malmo uses Minecraft to test its AI ability to navigate the world and collaborate.

In the future, games will have AI-powered levels, open worlds, and graphics. Even more interesting, the growth of data science will allow games to be created from nothing, entirely by artificial intelligence — without any human involvement. Graphics will be more realistic and based on real-world images, and levels will increasingly be generated from developers’ game designs.

Furthermore, Next Generation In-Game AI will present a new realm of self-learning that allows characters to learn and grow just like people. This same technology will also allow games to adapt to what each player likes, catering to individual user personalization. DeepMind is also producing flexible behaviors for their characters in simulated environments.

Today, 81% of digital gaming time is spent on mobile apps — reducing the demand for the heavier, more stationary console. As a result, mobile phone games are making millions. In 2019, Clash of Clans was the highest-earning gaming app in the iOS App Store, earning $1.54 million per day. This game isn’t the only high-baller, though. Homescapes earns $44 million per month, Candy Crush Saga earns $71 million per month, and Battlegrounds earns $31 million per month.


To reach the point of futuristic gaming, developers will need to begin testing their AI software. Doing so can assist in checking their game’s speed and performance. It can also shorten the time needed to develop and release new games.

Furthermore, AI testing provides developers with a tool powerful enough to run through a whole game in no more than an hour. It helps them discover things they didn’t know were inside the games they’ve developed.

The industry of mobile gaming is in the hands of artificial intelligence and vice versa. For more information about the future of gaming, check out the infographic below.

Gaming and Artificial Intelligence
Source: PinkLion.AI

The post The Industry of Mobile Gaming Explained appeared first on Dumb Little Man.

MIT researchers are working on AI-based knitting design software that will let anyone, even novices, make their own clothes

The growing popularity of 3D printing machines and companies like Thingiverse and Shapeways have given previously unimaginable powers to makers, enabling them to create everything from cosplay accessories to replacement parts. But even though 3D printing has created a new world of customized objects, most of us are still buying clothes off the rack. Now researchers at MIT are working on software that will allow anyone to customize or design their own knitwear, even if they have never picked up a ball of yarn.

A team of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), led by computer scientist Alexandre Kaspar, released two new papers describing the software today. One is about a system called InverseKnit that automatically creates patterns from photos of knitted items. The other one introduces new design software, called CADKnit, that allows people with no knitting or design experience to quickly customize templates, adjusting the size, final shape and decorative details (like the gloves shown below).

The final patterns can be used with knitting machines, which have been available to home knitters for years but still require a fair amount of technical knowledge to design patterns for.


Gloves made using CADKnit

Both CADKnit and InverseKnit aim to make designing and making machine-knitted garments as accessible as 3D printing is now. Once the software is commercialized, Kaspar envisions “knitting as a service” for consumers who want to order customized garments. It can also enable clothing designers to spend less time learning how to write knitwear patterns for machines and reduce waste in the prototyping and manufacturing process. Another target audience for the software is hand-knitters who want to try a new way of working with yarn.

“If you think about it like 3D printing, a lot of people have been using 3D printers or hacking 3D printers, so they are great potential users for our system, because they can do that with knitting,” says Kaspar.

One potential partner for CADKnit and InverseKnit is Kniterate, a company that makes a digital knitting machine for hobbyists, makerspaces and small businesses. Kaspar says he has been talking to Kniterate’s team about making knitwear customization more accessible.

To develop InverseKnit, researchers first created a dataset of knitting patterns with matching images that were used to train a deep neural network to generate machine knitting patterns. The team says that during InverseKnit’s testing, the system produced accurate instructions 94% of the time. There is still some work to do before InverseKnit can be commercialized. For example, the machine was tested using one specific type of acrylic yarn, so it needs to be trained to work with other fibers.
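A headline figure like “accurate instructions 94% of the time” presumably reflects some instruction-level accuracy over the generated pattern grids. The sketch below shows how such a metric could be computed; the stitch codes and the exact definition are assumptions for illustration, not the paper’s actual evaluation.

```python
def instruction_accuracy(predicted, reference):
    """Fraction of stitch instructions a generated pattern gets right.

    `predicted` and `reference` are same-shaped grids (lists of rows) of
    instruction codes; the codes used here (e.g. "K" knit, "P" purl) are
    illustrative, not InverseKnit's actual instruction set.
    """
    total = correct = 0
    for pred_row, ref_row in zip(predicted, reference):
        for pred, ref in zip(pred_row, ref_row):
            total += 1
            correct += (pred == ref)
    return correct / total

pred = [["K", "P"], ["K", "K"]]
ref = [["K", "P"], ["P", "K"]]
print(instruction_accuracy(pred, ref))  # 0.75
```

Per-instruction accuracy is a natural fit here because a knitting machine executes the pattern cell by cell, so each wrong instruction is a concrete manufacturing error.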

CADKnit, on the other hand, combines 2D images with CAD and photo-editing software to create customizable templates. It was tested with knitting newbies who, despite having little machine knitting experience, were still able to create relatively complex garments like gloves, with effects including lace motifs and color patterns.

“3D printing took a while before people were comfortable enough to think they could do something with it,” says Kaspar. “It will be the same thing with what we do.”

Google will not bid for the Pentagon’s $10B cloud computing contract, citing its “AI Principles”

Google has dropped out of the running for JEDI, the massive Defense Department cloud computing contract potentially worth $10 billion. In a statement to Bloomberg, Google said that it decided not to participate in the bidding process, which ends this week, because the contract may not align with the company’s principles for how artificial intelligence should be used.

In a statement to Bloomberg, a Google spokesperson said, “We are not bidding on the JEDI contract because first, we couldn’t be assured that it would align with our AI Principles. And second, we determined that there were portions of the contract that were out of scope with our current government certifications,” adding that Google is still “working to support the U.S. government with our cloud in many ways.”

Officially called Joint Enterprise Defense Infrastructure, bidding for the initiative’s contract began two months ago and closes this week. JEDI’s lead contender is widely considered to be Amazon, because it set up the CIA’s private cloud, but Oracle, Microsoft, and IBM are also expected to be in the running.

The winner of the contract, which could last for up to 10 years, is expected to be announced by the end of the year. The project is meant to accelerate the Defense Department’s adoption of cloud computing and services. Only one provider will be chosen, a controversial decision that the Pentagon defended by telling Congress that the pace of handling task orders in a multiple-award contract “could prevent DOD from rapidly delivering new capabilities and improved effectiveness to the warfighter that enterprise-level cloud computing can enable.”

Google also addressed the controversy over a single provider, telling Bloomberg that “had the JEDI contract been open to multiple vendors, we would have submitted a compelling solution for portions of it. Google Cloud believes that a multi-cloud approach is in the best interest of government agencies, because it allows them to choose the right cloud for the right workload.”

Google’s decision not to bid for JEDI comes four months after it reportedly decided not to renew its contract with the Pentagon for Project Maven, which involved working with the military to analyze drone footage, including images taken in conflict zones. Thousands of Google employees signed a petition against the company’s work on Project Maven, saying it meant Google was directly involved in warfare. Afterward, Google came up with its “AI Principles,” a set of guidelines for how it will use its AI technology.

It is worth noting, however, that Google is still under employee fire because it is reportedly building a search engine for China that will comply with the government’s censorship laws, eight years after exiting the country for reasons including its limits on free speech.

Cootek, the Chinese maker of TouchPal keyboard, files for $100M US IPO

Cootek, the Chinese mobile internet company best known for keyboard app TouchPal, has filed for a public offering in the United States. In its F-1 form, submitted last week to the Securities and Exchange Commission, Cootek said it wants to raise up to $100 million.

The Shanghai-based company began operating in 2008, when TouchPal launched, and incorporated as CooTek in March 2012. In its SEC filing, Cootek said it currently has 132 million daily active users, with average DAUs up 75% year-over-year as of June. It also said total ad revenue grew 453% in the six-month period ending in June.

While the AI-based TouchPal, which offers glide typing and predictive text, is Cootek’s most popular product, the company has 15 other apps in its portfolio, including the fitness apps HiFit and ManFIT and a virtual assistant called Talia. The company uses its proprietary AI and big data technology to analyze language data collected from users and the internet, then uses those insights to develop lifestyle, healthcare and entertainment apps. Together, those 15 apps averaged 22.2 million monthly active users and 7.3 million daily active users in June.

TouchPal itself averaged 125.4 million daily active users in June 2018, with active users launching the app an average of 72 times a day. It currently supports 110 languages.

Most of Cootek’s revenue comes from mobile advertising. It says net revenue grew from $11 million in 2016 to $37.3 million in 2017, or 238.5% year-over-year, while its net loss dropped from $30.7 million in 2016 to $23.7 million in 2017. It achieved net income of $3.5 million for the six months ending in June, compared to a net loss of $16.2 million in the same period a year ago.
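As a sanity check, the year-over-year figure can be recomputed from the headline revenue numbers; the small gap from the reported 238.5% comes from rounding in the millions figures, which are truncated in the filing summary.

```python
def yoy_growth(prev: float, curr: float) -> float:
    """Year-over-year growth, expressed as a percentage of the prior period."""
    return (curr - prev) / prev * 100

# Cootek's reported net revenue, in millions of USD.
print(round(yoy_growth(11.0, 37.3), 1))  # prints 239.1
```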

Cootek plans to list under the ticker symbol CTK on the New York Stock Exchange and will use the IPO’s proceeds to grow its user base, invest in AI and natural language processing technology and improve advertising performance. The offering will be underwritten by Credit Suisse, BofA Merrill Lynch and Citi.

UK report warns DeepMind Health could gain ‘excessive monopoly power’

DeepMind’s foray into digital health services continues to raise concerns. The latest worries are voiced by a panel of external reviewers appointed by the Google-owned AI company to report on its operations after its initial data-sharing arrangements with the U.K.’s National Health Service (NHS) ran into a major public controversy in 2016.

The DeepMind Health Independent Reviewers’ 2018 report flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to “exert excessive monopoly power” as a result of the data access and streaming infrastructure that’s bundled with provision of the Streams app — and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in the future, want to offer their own digital assistance solutions to the Trust.

While the underlying FHIR (Fast Healthcare Interoperability Resources) API deployed by DeepMind for Streams is open, the contract between the company and the Royal Free Trust funnels connections via DeepMind’s own servers and prohibits connections to other FHIR servers, a commercial structure that seemingly works against the openness and interoperability DeepMind’s co-founder Mustafa Suleyman has claimed to support.
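To make the interoperability point concrete, here is a minimal sketch of what a standard FHIR read looks like. The base URL, patient ID and lab values below are hypothetical, not DeepMind’s or the Royal Free’s actual endpoints or data; the point is that because FHIR standardizes the URL shape and resource format, any conformant server could answer the same request, which is precisely what a server-funneling contract clause forecloses.

```python
# Hypothetical FHIR base URL for illustration only.
FHIR_BASE = "https://fhir.example-trust.nhs.uk"

def observation_search_url(patient_id: str, loinc_code: str) -> str:
    """Build a standard FHIR search URL for a patient's lab observations.

    The same request shape works against any conformant FHIR server,
    regardless of vendor."""
    return f"{FHIR_BASE}/Observation?patient={patient_id}&code={loinc_code}"

# A minimal FHIR Observation resource: serum creatinine, the kind of lab
# value an AKI-alerting app consumes (values here are made up).
observation = {
    "resourceType": "Observation",
    "code": {"coding": [{"system": "http://loinc.org", "code": "2160-0"}]},
    "valueQuantity": {"value": 1.8, "unit": "mg/dL"},
}

def observation_value(resource: dict) -> float:
    """Pull the numeric result out of a FHIR Observation resource."""
    return resource["valueQuantity"]["value"]

print(observation_search_url("nhs-123", "2160-0"))
print(observation_value(observation))  # prints 1.8
```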

“There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position,” the reviewers write.

They do point to DeepMind’s “stated commitment to interoperability of systems” and “their adoption of the FHIR open API” as positive indications, writing: “This means that there is potential for many other SMEs to become involved, creating a diverse and innovative marketplace which works to the benefit of consumers, innovation and the economy.”

“We also note DeepMind Health’s intention to implement many of the features of Streams as modules which could be easily swapped, meaning that they will have to rely on being the best to stay in business,” they add. 

However, stated intentions and future potentials are clearly not the same as on-the-ground reality. And, as it stands, a technically interoperable app-delivery infrastructure is being encumbered by prohibitive clauses in a commercial contract — and by a lack of regulatory pushback against such behavior.

The reviewers also raise concerns about an ongoing lack of clarity around DeepMind Health’s business model — writing: “Given the current environment, and with no clarity about DeepMind Health’s business model, people are likely to suspect that there must be an undisclosed profit motive or a hidden agenda. We do not believe this to be the case, but would urge DeepMind Health to be transparent about their business model, and their ability to stick to that without being overridden by Alphabet. For once an idea of hidden agendas is fixed in people’s mind, it is hard to shift, no matter how much a company is motivated by the public good.”

“We have had detailed conversations about DeepMind Health’s evolving thoughts in this area, and are aware that some of these questions have not yet been finalised. However, we would urge DeepMind Health to set out publicly what they are proposing,” they add.

DeepMind has suggested it wants to build healthcare AIs that are capable of charging by results. But Streams does not involve any AI. The service is also being provided to NHS Trusts for free, at least for the first five years — raising the question of how exactly the Google-owned company intends to recoup its investment.

Google of course monetizes a large suite of free-at-the-point-of-use consumer products — such as the Android mobile operating system; its cloud email service Gmail; and the YouTube video sharing platform, to name three — by harvesting people’s personal data and using that information to inform its ad targeting platforms.

Hence the reviewers’ recommendation for DeepMind to set out its thinking on its business model to avoid its intentions vis-a-vis people’s medical data being viewed with suspicion.

The company’s historical modus operandi also underlines the potential monopoly risks if DeepMind is allowed to carve out a dominant platform position in digital healthcare provision — given how effectively its parent has been able to turn a free-for-OEMs mobile OS (Android) into global smartphone market OS dominance, for example.

So, while DeepMind only has a handful of contracts with NHS Trusts for the Streams app and delivery infrastructure at this stage, the reviewers’ concerns over the risk of the company gaining “excessive monopoly power” do not seem overblown.

They are also worried about DeepMind’s ongoing vagueness about how exactly it works with its parent Alphabet, and what data could ever be transferred to the ad giant — an inevitably queasy combination when stacked against DeepMind’s handling of people’s medical records.

“To what extent can DeepMind Health insulate itself against Alphabet instructing them in the future to do something which it has promised not to do today? Or, if DeepMind Health’s current management were to leave DeepMind Health, how much could a new CEO alter what has been agreed today?” they write.

“We appreciate that DeepMind Health would continue to be bound by the legal and regulatory framework, but much of our attention is on the steps that DeepMind Health have taken to take a more ethical stance than the law requires; could this all be ended? We encourage DeepMind Health to look at ways of entrenching its separation from Alphabet and DeepMind more robustly, so that it can have enduring force to the commitments it makes.”

Responding to the report’s publication on its website, DeepMind writes that it’s “developing our longer-term business model and roadmap.”

“Rather than charging for the early stages of our work, our first priority has been to prove that our technologies can help improve patient care and reduce costs. We believe that our business model should flow from the positive impact we create, and will continue to explore outcomes-based elements so that costs are at least in part related to the benefits we deliver,” it continues.

So it has nothing to say to defuse the reviewers’ concerns about making its intentions for monetizing health data plain — beyond deploying a few choice PR soundbites.

On its links with Alphabet, DeepMind also has little to say, writing only that: “We will explore further ways to ensure there is clarity about the binding legal frameworks that govern all our NHS partnerships.”

“Trusts remain in full control of the data at all times,” it adds. “We are legally and contractually bound to only using patient data under the instructions of our partners. We will continue to make our legal agreements with Trusts publicly available to allow scrutiny of this important point.”

“There is nothing in our legal agreements with our partners that prevents them from working with any other data processor, should they wish to seek the services of another provider,” it also claims in response to additional questions we put to it.

“We hope that Streams can help unlock the next wave of innovation in the NHS. The infrastructure that powers Streams is built on state-of-the-art open and interoperable standards, known as FHIR. The FHIR standard is supported in the UK by NHS Digital, NHS England and the INTEROPen group. This should allow our partner trusts to work more easily with other developers, helping them bring many more new innovations to the clinical frontlines,” it adds in additional comments to us.

“Under our contractual agreements with relevant partner trusts, we have committed to building FHIR API infrastructure within the five year terms of the agreements.”

Asked about the progress it’s made on a technical audit infrastructure for verifying access to health data, which it announced last year, it reiterated the wording on its blog, saying: “We will remain vigilant about setting the highest possible standards of information governance. At the beginning of this year, we appointed a full time Information Governance Manager to oversee our use of data in all areas of our work. We are also continuing to build our Verifiable Data Audit and other tools to clearly show how we’re using data.”

So developments on that front look as slow as we expected.

The Google-owned U.K. AI company began its push into digital healthcare services in 2015, quietly signing an information-sharing arrangement with a London-based NHS Trust that gave it access to around 1.6 million people’s medical records for developing an alerts app for a condition called Acute Kidney Injury.

It also inked an MoU with the Trust in which the pair set out their ambition to apply AI to NHS data sets. (They even went so far as to get ethical sign-off for an AI project, but have consistently claimed the Royal Free data was not fed to any AIs.)

However, the data-sharing collaboration ran into trouble in May 2016 when the scope of patient data being shared by the Royal Free with DeepMind was revealed (via investigative journalism, rather than by disclosures from the Trust or DeepMind).

None of the ~1.6 million people whose non-anonymized medical records had been passed to the Google-owned company had been informed or asked for their consent. And questions were raised about the legal basis for the data-sharing arrangement.

Last summer the U.K.’s privacy regulator concluded an investigation of the project — finding that the Royal Free NHS Trust had broken data protection rules during the app’s development.

Yet despite ethical questions and regulatory disquiet about the legality of the data sharing, the Streams project steamrollered on. And the Royal Free Trust went on to implement the app for use by clinicians in its hospitals, while DeepMind has also signed several additional contracts to deploy Streams to other NHS Trusts.

More recently, the law firm Linklaters completed an audit of the Royal Free Streams project, after being commissioned by the Trust as part of its settlement with the ICO. Though this audit only examined the current functioning of Streams. (There has been no historical audit of the lawfulness of people’s medical records being shared during the build and test phase of the project.)

Linklaters did recommend that the Royal Free terminate its wider MoU with DeepMind, and the Trust has confirmed to us that it will be following the firm’s advice.

“The audit recommends we terminate the historic memorandum of understanding with DeepMind which was signed in January 2016. The MOU is no longer relevant to the partnership and we are in the process of terminating it,” a Royal Free spokesperson told us.

So DeepMind, probably the world’s most famous AI company, is in the curious position of being involved in providing digital healthcare services to U.K. hospitals that don’t actually involve any AI at all. (Though it does have some ongoing AI research projects with NHS Trusts too.)

In mid 2016, at the height of the Royal Free DeepMind data scandal — and in a bid to foster greater public trust — the company appointed the panel of external reviewers who have now produced their second report looking at how the division is operating.

And it’s fair to say that much has happened in the tech industry since the panel was appointed to further undermine public trust in tech platforms and algorithmic promises — including the ICO’s finding that the initial data-sharing arrangement between the Royal Free and DeepMind broke U.K. privacy laws.

The eight members of the panel for the 2018 report are: Martin Bromiley OBE; Elisabeth Buggins CBE; Eileen Burbidge MBE; Richard Horton; Dr. Julian Huppert; Professor Donal O’Donoghue; Matthew Taylor; and Professor Sir John Tooke.

In their latest report the external reviewers warn that the public’s view of tech giants has “shifted substantially” versus where it was even a year ago — asserting that “issues of privacy in a digital age are, if anything, of greater concern.”

At the same time, politicians are gazing rather more critically at the works and social impacts of tech giants.

The U.K. government, meanwhile, has been keen to position itself as a supporter of AI, providing public funds for the sector and, in its Industrial Strategy white paper, identifying AI and data as one of four so-called “Grand Challenges” where it believes the U.K. can “lead the world for years to come” — even specifically name-checking DeepMind as one of a handful of leading-edge homegrown AI businesses for the country to be proud of.

Still, questions over how to manage and regulate public sector data and AI deployments — especially in highly sensitive areas such as healthcare — remain to be clearly addressed by the government.

Meanwhile, the encroaching ingress of digital technologies into the healthcare space — even when those technologies involve no AI — is already presenting major challenges, putting pressure on existing information governance rules and structures and raising the specter of monopolistic risk.

Asked whether it offers any guidance to NHS Trusts around digital assistance for clinicians, including specifically whether it requires multiple options be offered by different providers, the NHS’ digital services provider, NHS Digital, referred our question on to the Department of Health (DoH), saying it’s a matter of health policy.

The DoH in turn referred the question to NHS England, the executive non-departmental body which commissions contracts and sets priorities and directions for the health service in England.

And at the time of writing, we’re still waiting for a response from the steering body.

Ultimately it looks like it will be up to the health service to put in place a clear and robust structure for AI and digital decision services that fosters competition by design by baking in a requirement for Trusts to support multiple independent options when procuring apps and services.

Without that important check and balance, the risk is that platform dynamics will quickly dominate and control the emergent digital health assistance space — just as big tech has dominated consumer tech.

But publicly funded healthcare decisions and data sets should not simply be handed to the single market-dominating entity that’s willing and able to burn the most resource to own the space.

Nor should government stand by and do nothing when there’s a clear risk that a vital area of digital innovation is at risk of being closed down by a tech giant muscling in and positioning itself as a gatekeeper before others have had a chance to show what their ideas are made of, and before even a market has had the chance to form. 

The new AI-powered Google News app is now available for iOS

Google teased a new version of its News app with AI smarts at its I/O event last week, and today that revamped app landed for iOS and Android devices in 127 countries. The redesigned app replaces the previous Google Play Newsstand app.

The idea is to make finding and consuming news easier than ever, whilst providing an experience that’s customized to each reader and supportive of media publications. The AI element is designed to learn from what you read to help serve you a better selection of content over time, while the app is presented with a clear and clean layout.

Opening the app brings up the tailored ‘For You’ tab which acts as a quick briefing, serving up the top five stories “of the moment” and a tailored selection of opinion articles and longer reads below it.

The next section — ‘Headlines’ — dives more deeply into the latest news, covering global, U.S., business, technology, entertainment, sports, science and health segments. Clicking a story pulls up ‘Full Coverage’ mode, which surfaces a range of content around a topic including editorial and opinion pieces, tweets, videos and a timeline of events.

 

Favorites is a tab that allows customization set by the user — without AI. It works as you’d imagine, letting you mark out preferred topics, news sources and locations to filter your reads. There’s also an option for saved searches and stories which can be quickly summoned.

The final section is ‘Newsstand’ which, as the name suggests, aggregates media. Google said last week that it plans to offer over 1,000 magazine titles, which you can follow by tapping a star icon or subscribe to. It currently looks a little sparse without specific magazine titles, but we expect that’ll come soon.

As part of that, another feature coming soon is “Subscribe with Google,” which lets publications offer subscription-based content. The process of subscribing will use a user’s Google account and the payment information they already have on file. The paid content then becomes available across Google platforms, including Google News, Google Search and publishers’ own websites.

Facebook has a new job posting calling for chip designers

Facebook has posted a job opening looking for an expert in ASIC and FPGA, two custom silicon designs that companies can gear toward specific use cases — particularly in machine learning and artificial intelligence.

There’s been a lot of speculation in the Valley as to what Facebook’s interpretation of custom silicon might be, especially as it looks to optimize its machine learning tools — something that CEO Mark Zuckerberg referred to as a potential solution for identifying misinformation on Facebook using AI. The whispers about Facebook’s customized hardware vary depending on whom you talk to, but generally center on operating over the massive graph of personal data Facebook possesses. Most in the industry speculate that the hardware is being optimized for Caffe2, an AI framework deployed at Facebook, which would help it tackle those kinds of complex problems.

An FPGA is a more flexible and modular design, championed by Intel as a way to adapt to a changing machine learning-driven landscape. The downside commonly cited for FPGAs is that they are niche pieces of hardware that are complex to calibrate and modify, as well as expensive, making them less of a cover-all solution for machine learning projects. An ASIC is similarly a customized piece of silicon that a company can gear toward something specific, like mining cryptocurrency.

Facebook’s director of AI research tweeted about the job posting this morning, noting that he previously worked in chip design:

Interested in designing ASIC & FPGA for AI?
Design engineer positions are available at Facebook in Menlo Park.

I used to be a chip designer many moons ago: my engineering diploma was in Electrical… https://t.co/D4l9kLpIlV

Yann LeCun (@ylecun) April 18, 2018

While the whispers grow louder and louder about Facebook’s potential hardware efforts, this does seem to serve as at least another partial data point that the company is looking to dive deep into custom hardware to deal with its AI problems. That would mostly exist on the server side, though Facebook is looking into other devices like a smart speaker. Given the immense amount of data Facebook has, it would make sense that the company would look into customized hardware rather than use off-the-shelf components like those from Nvidia.

(The wildest rumor we’ve heard about Facebook’s approach is that it’s a diurnal system, flipping between machine training and inference depending on the time of day and whether people are, well, asleep in that region.)

Most of the other large players have found themselves looking into their own customized hardware. Google has its TPU for its own operations, while Amazon is also reportedly working on chips for both training and inference. Apple, too, is reportedly working on its own silicon, which could eventually displace Intel from its line of computers. Microsoft is also diving into FPGAs as a potential approach for machine learning problems.

Still, Facebook’s interest in ASICs and FPGAs does seem to be just that: dipping its toes into the water. Nvidia has a lot of control over the AI space with its GPU technology, which it can optimize for popular AI frameworks like TensorFlow. And there is also a large number of very well-funded startups exploring customized AI hardware, including Cerebras Systems, SambaNova Systems, Mythic and Graphcore (and that isn’t even getting into the large amount of activity coming out of China). So there are, to be sure, a lot of different interpretations as to what this looks like.

One significant problem Facebook may face is that this job opening may just sit open in perpetuity. Another common criticism of FPGAs as a solution is that developers who specialize in them are hard to find. While these kinds of problems are becoming much more interesting, it’s not clear whether this is more of an experiment than Facebook going all-in on custom hardware for its operations.

But nonetheless, this seems like more confirmation of Facebook’s custom hardware ambitions, and another piece of validation that Facebook’s data set is becoming so increasingly large that if it hopes to tackle complex AI problems like misinformation, it’s going to have to figure out how to create some kind of specialized hardware to actually deal with it.

A representative from Facebook did not immediately return a request for comment.

5 Top Technology Trends That Will Shape 2018

Groundbreaking steps are happening in the technology industry all the time. In the past couple of years alone, leaps and bounds have resulted in better augmented reality, virtual reality, artificial intelligence, speech recognition, and more. This year, you can expect more tech innovations to enhance our lives.

Here are the top technology trends you need to watch out for this year.

Augmented Reality


We’ve already seen what today’s AR mobile apps can do, and games like Pokémon Go only scratch the surface.

The AR technology from companies like the startup DAQRI, however, extends well beyond a mobile game experience. Its technology is found in other products, like its $15,000 AR helmets.

The helmets are entirely hands-free and can run for hours, a requirement for their primary users: industrial workers, sailors, and soldiers.

Note that these headsets aren’t quite full-scale AR because they suffer from the problem common to most AR headsets: a narrow rectangular field of view that cuts off images when the user moves.

Now that the company has partnered with Two Trees, a holography specialist, to develop new dynamic holography technology, it could well help revolutionize AR.

See Also: How Augmented Reality Is Changing The Game

Google RankBrain

One of the biggest contributors to the advancement of search algorithms in recent years has been Google’s machine learning Artificial Intelligence (AI) system, RankBrain.

Since introducing RankBrain over two years ago, Google has continued to embrace it, using it to return the best results matching a user’s query. RankBrain has gone from being used in 15% of Google search queries to being used in all of them.

Google has also been dabbling in other AI interests. This includes the development of a Cloud Vision API, which has the capability to recognize a huge number of objects. Plus, its Google Brain division has reportedly been developing an AI that can build AI better than humans can.

Artificial Intelligence

Google RankBrain aside, artificial intelligence isn’t found only in search; it’s in just about every industry. Journalism, financial services, video gaming, gambling, automotive, the military and even healthcare are just some examples.

Currently, the vast majority of AI systems function as a supportive tool that can make certain processes more simplified, effective, and faster.

That said, as AI enters more and more fields, people like Jack Ma question what impact the future of AI will have on society as a whole. The concerns of Alibaba’s founder and Executive Chairman were made known earlier this year during the World Economic Forum (WEF) at Davos.

“The AI, Big Data is a threat to human beings. The AI and robots are going to kill a lot of jobs because, in the future, these will be done by machines,” Ma stated during a discussion panel.

He believes that AI should be used to support people and added that tech giants, like Alibaba, Amazon, and Facebook, need to be responsible and “should spend money on technology that enables people, empowers people, and makes life better”.

See Also: 5 Reasons Why You Should Consider AI Automation for Small Business

Smart speakers

Forget about talking to your smartphone. The future is all about voice-controlled smart speakers now.

This technology perfectly fits into the ecosystem of a smart home as smart speakers can function as the main control hub. They can answer questions, set timers, play music, and control other devices at home.

As you might imagine, there is a fierce competition between market leaders. Amazon, Google, Apple, and Microsoft aim to develop and sell the most sought-after smart speakers.

Today, the competition is tight between Amazon and Google. These companies are leading the market with smart speakers that are affordable, accessible, and superior to Apple’s Siri.

And once Apple’s smart speaker (HomePod) is out later this year, fans will still flock to buy it despite the high price tag. That only reflects people’s desire to always get their hands on the latest technology.


Speech recognition

Speech recognition is another tech that has recently advanced both in its capabilities and its use. Whether you’re asking your smartphone a question or your smart speaker, speech recognition is at play.

While there have always been kinks to work out when it comes to this technology, last August Microsoft claimed a new speech recognition record, reducing its error rate to an impressive 5.1%.

This percentage matched the error rate of multiple human transcribers in a well-known accuracy test.
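The 5.1% figure is a word error rate (WER), the standard metric for transcription accuracy: the minimum number of word substitutions, insertions and deletions needed to turn the system’s transcript into the reference transcript, divided by the reference length. A minimal sketch of the computation (this is the generic metric, not Microsoft’s specific evaluation code):

```python
def word_error_rate(reference: list, hypothesis: list) -> float:
    """WER = edit distance between reference and hypothesis word lists,
    divided by the number of reference words."""
    m, n = len(reference), len(hypothesis)
    # classic dynamic-programming edit distance
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n] / m

ref = "the quick brown fox jumps over the lazy dog".split()
hyp = "the quick brown box jumps over lazy dog".split()  # 1 sub + 1 deletion
print(round(word_error_rate(ref, hyp), 3))  # prints 0.222
```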

Microsoft’s continued improvements in speech recognition technology are part of its wider effort to advance the state of the art in AI and bring new innovations to market.

Conclusion

Of course, no one really knows what the future holds for these top technology trends. Maybe full-scale AR will eliminate the need for mobile phones. Perhaps speech recognition will prove superior to human transcribers, or maybe not. Only time will tell where these innovations might take us next.

The post 5 Top Technology Trends That Will Shape 2018 appeared first on Dumb Little Man.

Google declares war against Alexa and Siri at CES 2018


It’s an artificial intelligence showdown.

This year at CES, the world’s largest electronics trade show (running Jan. 9-12), thousands of companies will travel to Las Vegas to show off their newest products and build new partnerships. But this time around, one unusual exhibitor stands out from the rest: Google.

It’s the first time in many years that Google will have its own large, standalone booth in the middle of the convention center. But the search giant has gone far beyond buying space on the showroom floor. It’s also commissioned several large advertisements around the city, including one you simply can’t miss.


5 Reasons Why You Should Consider AI Automation for Small Business

If you are aware of the developments in technology, then you have probably heard about Artificial Intelligence (AI). For a lot of people, it’s too complex or high-tech so they don’t really pay a lot of attention to it.

In fact, even small businesses don’t think much of AI. They believe that only big tech companies like Apple and Google can utilize it. However, that’s not true.

AI has numerous benefits for small businesses. It’s something you cannot ignore if you want to stay ahead of your competition.

Today, open-minded businesses have started using AI to create business logos, respond to emails, comb the internet for leads, help customers with chatbots and a lot more.

If you have not seriously considered AI automation for your business yet, then the following 5 reasons can surely convince you.

Enhanced Bookkeeping

There are plenty of AI tools designed specifically for bookkeeping that you can use. While many offer help with basic data entry tasks, some are more advanced and can perform many roles: you can use them to read and prepare invoices, set invoice reminders, release payments on schedule, and more.

So, instead of expanding your accounting department, you can invest in an AI bookkeeping program which is more affordable and highly useful.
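To make the idea concrete, here is a minimal sketch of the kind of rule an AI bookkeeping program automates. The invoice records, field names, and reminder window below are all hypothetical; a real tool would pull invoices from an accounting system and use far more sophisticated logic.

```python
from datetime import date, timedelta

# Hypothetical invoice records; a real bookkeeping tool would pull
# these from an accounting system rather than a hard-coded list.
invoices = [
    {"id": "INV-001", "customer": "Acme Co", "due": date(2018, 1, 15), "paid": False},
    {"id": "INV-002", "customer": "Globex",  "due": date(2018, 2, 20), "paid": True},
    {"id": "INV-003", "customer": "Initech", "due": date(2018, 1, 25), "paid": False},
]

def reminders_due(invoices, today, days_before=7):
    """Return unpaid invoices that are due within the reminder window."""
    window = today + timedelta(days=days_before)
    return [inv for inv in invoices if not inv["paid"] and inv["due"] <= window]

for inv in reminders_due(invoices, today=date(2018, 1, 20)):
    print(f"Reminder: invoice {inv['id']} for {inv['customer']} is due {inv['due']}")
```

Even this toy version shows why the automation pays off: the check runs every day without anyone remembering to do it.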

Lead Nurturing

No matter how skilled your sales reps are, they will always have limitations.

First, a rep can handle only a certain number of leads at a time. Second, they need a certain amount of time with every lead to learn about their personality, pain points, opportunities for building a connection, and the many other things required to nurture them.

However, with AI automation you can take the entire process to the next level and benefit from increased productivity.

An AI program designed for lead nurturing can read and respond to your prospects’ emails using a list of set messages and natural language processing (NLP). It can also go through your past conversations with leads to pinpoint important bits of information. Most importantly, it can work 24/7, since it functions without human intervention.
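In spirit, the simplest version of such a responder matches incoming text against a list of set messages. The sketch below uses plain keyword matching with made-up intents and canned replies; a real lead-nurturing product would use an actual NLP model, but the overall shape is the same.

```python
# Canned replies keyed by hypothetical "intents"; a real lead-nurturing
# tool would classify emails with an NLP model, not keyword lists.
CANNED_REPLIES = {
    "pricing": "Thanks for asking! Our plans start at $29/month.",
    "demo":    "Happy to help! You can book a demo on our website.",
    "support": "Sorry to hear that. Our support team will follow up shortly.",
}

KEYWORDS = {
    "pricing": ["price", "cost", "how much"],
    "demo":    ["demo", "trial", "try it"],
    "support": ["broken", "issue", "problem"],
}

def classify(email_text):
    """Return the first intent whose keywords appear in the email, else None."""
    text = email_text.lower()
    for intent, words in KEYWORDS.items():
        if any(w in text for w in words):
            return intent
    return None

def auto_reply(email_text):
    intent = classify(email_text)
    # Fall back to a generic reply (and a human follow-up) when nothing matches.
    return CANNED_REPLIES.get(intent, "Thanks for reaching out! A rep will reply soon.")
```

The fallback branch is the important design choice: automation handles the routine questions, and anything it cannot classify is routed to a person.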

Online Customer Support

A number of studies have found that customers are more comfortable inquiring about a company’s products or services through messaging, especially online chat, than over voice calls. However, hiring a full-time customer support executive can be expensive for a small business. Again, this is where AI automation can be a great option.

Not only are AI-powered chatbots highly popular today, but you can also easily find some highly affordable options. You can install one on your website so that when your customers need information, it is readily available. In addition, having an AI-powered chatbot can make your website more attractive.

See Also: How To Boost Your Business with Influence Marketing Chatbots

Cheap but Quality Branding

Usually, it’s hard to find quality and affordability in the same place. However, AI automation seems to have changed that.

This is because it can help with your company branding in many ways and at modest prices. Companies like Tailor Brands offer an entire suite of branding tools, from logo creation to social media banners, at a fraction of the price you would pay a graphic designer.

Intelligent Personal Assistants

You are probably already familiar with virtual personal assistants such as Apple’s Siri or Microsoft’s Cortana. Today, a new range of similar assistants is emerging, and they are even more intelligent and more suitable for businesses. For instance, there is Amy from x.ai, which can arrange meetings for you, or Pana, which can arrange your travel.

AI has matured enough that it can be utilized in different ways in every industry, and it’s now easily available at affordable costs. If you haven’t yet considered using it for your business, now is probably the right time to revisit your strategy.

The post 5 Reasons Why You Should Consider AI Automation for Small Business appeared first on Dumb Little Man.

Powered by WPeMatico

Crunch Report | David Letterman Is Coming to Netflix

David Letterman is coming to Netflix, Didi Chuxing backs Careem in the Middle East, Cruise is running an autonomous ride-hailing service and Andrew Ng launches Deeplearning.ai. All this on Crunch Report. Read More

TrueFace.AI busts facial recognition imposters

Facial recognition technology is more prevalent than ever before. It’s being used to identify people in airports, put a stop to child sex trafficking, and shame jaywalkers.

But the technology isn’t perfect. One major flaw: It sometimes can’t tell the difference between a living person’s face and a photo of that person held up in front of a scanner. 

TrueFace.AI facial recognition is trying to fix that flaw. Launched on Product Hunt in June, it’s meant to detect “picture attacks.”

The company originally created Chui in 2014 to work with customized smart homes. Then they realized clients were using it more for security purposes, and TrueFace.AI was born.  Read more…

Cognitiv+ is using AI for contract analysis and tracking

 Another legal tech startup coming out of the UK: Cognitiv+ is applying artificial intelligence to automate contract analysis and management, offering businesses a way to automate staying on top of legal risks, obligations and changing regulatory landscapes. Read More

Superintelligent AI explains Softbank’s push to raise a $100BN Vision Fund

Anyone who’s seen Softbank CEO Masayoshi Son give a keynote speech will know he rarely sticks to the standard industry conference playbook. And his turn on the stage at Mobile World Congress this morning was no different, with Son making like Eldon Tyrell and telling delegates about his personal belief in a looming computing Singularity… Read More

Baidu furthers AI push with acquisition of digital assistant startup Raven Tech

Baidu is furthering its push into artificial intelligence after it announced the acquisition of Raven Tech, a Chinese startup that developed an AI voice assistant platform. Baidu confirmed it has bought the startup’s tech, product and staff of 60. The deal comes a month after Baidu hired noted AI expert Qi Lu, formerly with Microsoft, as its COO and Group President.… Read More

Crunch Report | ACLU Enrolls in Y Combinator

We are joined by the CEO of Product Hunt, which was just acquired by AngelList, the ACLU enrolls in Y Combinator, Daimler builds a self-driving car for Uber and top poker players lose to an AI developed by Carnegie Mellon. All this on Crunch Report. Read More

Crunch Report | Apple Suing Qualcomm for $1 Billion

Apple is suing Qualcomm for $1 billion, the hit app Meitu may be collecting too much data, Whitehouse.gov removes LGBT, climate change and more and Kristen Stewart appeared as a co-author on an AI paper. All this on Crunch Report. Read More

Lobster nets £1M to scale its user-generated content licensing marketplace

U.K. startup Lobster is gearing up to scale its user-generated content licensing marketplace, as it closes a £1 million Series A. It’s expecting to have closed out the round next week, with 85 per cent of the funding committed at this point and only its decision on the last few investors outstanding. Read More

Here's why those tech billionaires are throwing millions at ethical AI

Worried about a dystopian future in which AI rules the world and humans are enslaved to autonomous technology? You’re not alone. So are billionaires (kind of).

First it was the Partnership on AI formed by Google, Amazon, Microsoft, Facebook and IBM. 

Then came Elon Musk and Peter Thiel’s recent investment in the $1 billion research body OpenAI.

Now, a new batch of tech founders are throwing money at ethical artificial intelligence (AI) and autonomous systems (AS). And experts say it couldn’t come soon enough.

LinkedIn founder Reid Hoffman and eBay founder Pierre Omidyar (through his philanthropic investment fund) donated a combined $20 million to the Ethics and Governance of Artificial Intelligence Fund on Jan. 11, helping ensure the future is more “man and machine, not man versus machine,” as IBM CEO Ginni Rometty put it to the WSJ on Thursday. Read more…

Crunch Report | Nintendo Switch Hits the Market on March 3

Nintendo Switch to hit the market on March 3, San Francisco District Attorney brings lawsuit against Lily, Moon Express is going to the Moon and Microsoft buys AI startup Maluuba. All this on Crunch Report. Read More
