Artificial Intelligence


Not hot dog? PixFood lets you shoot and identify food

What happens when you add AI to food? Surprisingly, you don’t get a hungry robot. Instead you get something like PixFood. PixFood lets you take pictures of food, identify available ingredients, and, at this stage, find out recipes you can make from your larder.

It is privately funded.

“There are tons of recipe apps out there, but all they give you is, well, recipes,” said Tonnesson. “On the other hand, PixFood has the ability to help users get the right recipe for them at that particular moment. There are apps that cover some of the mentioned, but it’s still an exhausting process – since you have to fill in a 50-question quiz so it can understand what you like.”

They launched in August and currently have 3,000 monthly active users from 10,000 downloads. They’re working on perfecting the system for their first users.

“PixFood is an AI-driven food app with advanced photo recognition. The user experience is quite simple: it all starts with users taking a photo of any ingredient they would like to cook with, in the kitchen or in the supermarket,” said Tonnesson. “Why did we do it like this? Because it’s personalized. After you take a photo, the app instantly sends you tailored recipe suggestions! At first, they are more or less the same for everyone, but as you continue using it, it starts to learn what you precisely like, by connecting patterns and taking into consideration different behaviors.”

In my rudimentary tests the AI worked acceptably well and did not encourage me to eat a monkey. While the app raises the obvious question – why not just type in “corn?” – it’s an interesting use of vision technology that is definitely a step in the right direction.


Tonnesson expects the AI to start connecting you with other players in the food space, allowing you to order corn (but not a monkey) from a number of providers.

“Users should also expect partnerships with restaurants, grocery, meal-kit, and other food delivery services will be part of the future experiences,” he said.

Robotics-as-a-service is on the way and inVia Robotics is leading the charge

The team at inVia Robotics didn’t start out looking to build a business that would create a new kind of model for selling robotics to the masses, but that may be exactly what they’ve done.

After graduating from the University of Southern California’s robotics program, Lior Elazary, Dan Parks, and Randolph Voorhies were casting around for ideas that could get traction quickly.

“Our goal was to get something up and running that could make economic sense immediately,” Voorhies, the company’s chief technology officer, said in an interview.

The key was to learn from the lessons of what the team had seen as the missteps of past robotics manufacturers.

Despite the early success of iRobot, consumer-facing or collaborative robots that could operate alongside people had yet to gain traction in wider markets.

Willow Garage, the legendary company formed by some of the top names in the robotics industry, had shuttered just as Voorhies and his compatriots were graduating, and Boston Dynamics, another of the biggest names in robotics research, was bought by Google around the same time — capping a six-month buying spree that saw the search giant acquire eight robotics companies.

“In the midst of all this we were looking around and we said, ‘God, there were a lot of failed robotics companies!’ and we asked ourselves why did that happen?” Voorhies recalled. “A lot of the hardware companies that we’d seen, their plan was: step one, build a really cool robot, and step three, an app ecosystem will evolve and people will write apps and the robot will sell like crazy. And nobody had realized how to do step two, which was commercialize the robot.”

So the three co-founders looked for ideas they could take to market quickly.

The initial idea was a robot that could help with mobility and reaching for objects. “We built a six-degree-of-freedom arm with a mobile base,” Voorhies said.

However, the arm was tricky to build, components were expensive, and too many variables in the environment meant too much could go wrong with the robot’s operations. Ultimately the team at inVia realized that the big successes in robotics were happening in controlled environments.

“We very quickly realized that the environment is too unpredictable and there were too many different kinds of things that we needed to do,” he said. 

Parks then put together a white paper analyzing the different controlled environments where collaborative robots could be most easily deployed. The warehouse was the obvious choice.

Back in March of 2012 Amazon had come to the same conclusion and acquired Kiva Systems in a $775 million deal that brought Kiva’s army of robots to Amazon warehouses and distribution centers around the world.

“Dan put a white paper together for Lior and I,” Voorhies said, “and the thing that really stuck out was eCommerce logistics. Floors tend to be concrete slabs; they’re very flat with very little grade, and in general people are picking things off a shelf and putting them somewhere else.”

With the idea in place, the team, which included technologists Voorhies and Parks and Elazary, a serial entrepreneur who had already exited two businesses, just needed to get a working prototype together.

Most warehouses and shipping facilities that weren’t Amazon were using automated storage and retrieval systems, Voorhies said. These were big, automated systems that looked and worked like massive vending machines. But those systems, he said, involved a lot of sunk costs, and weren’t flexible or adaptable.

And those old systems weren’t built for the random access patterns and multi-item orders that comprise most of the shipping and packing done as eCommerce takes off.

With those sunk costs, though, warehouses are reluctant to change the model. The innovation that Voorhies and his team came up with was that the logistics providers wouldn’t have to.

“We didn’t like the upfront investment, not just to install one but just to start a company to build those things,” said Voorhies. “We wanted something we could bootstrap ourselves and grow very organically and just see wins very very quickly. So we looked at those ASRS systems and said why don’t we build mobile robots to do this.”

In the beginning, the team at inVia played with different ways to build the robot. At first there was a robot that could carry several different objects and another that would be responsible for picking.

The form factor the company eventually decided on was a movable, puck-shaped base with a scissor lift that can move a platform up and down. Attached to the back of the platform is a robotic arm that can extend forward and backward and has a suction pump at its end. The suction pump drags boxes onto the platform, which are then taken to a pick-and-pack employee.

“We were originally going to grab individual products. Once we started talking to real warehouses more and more, we realized that everyone stores everything in these boxes anyway,” said Voorhies. “And we said, why don’t we make our lives way easier, why don’t we just grab those totes?”

Since bootstrapping that initial robot, inVia has gone on to raise $29 million in financing to support its vision, most recently with a $20 million round that closed in July.

“E-commerce industry growth is driving the need for more warehouse automation to fulfill demand, and AI-driven robots can deliver that automation with the flexibility to scale across varied workflows. Our investment in inVia Robotics reflects our conviction in AI as a key enabler for the supply chain industry,” said Daniel Gwak, Co-Head, AI Investments at Point72 Ventures, the early-stage investment firm formed by the famed hedge fund manager Steven Cohen.

Given the pressures on shipping and logistics companies, it’s no surprise that robotics and automation are becoming critically important strategic investments, or that venture capital is flooding into the market. In the past two months alone, robotics companies targeting warehouse and retail automation have raised nearly $70 million in new financing. They include the $17.7 million recently raised by the French startup Exotec Solutions and Bossa Nova’s $29 million round for its grocery store robots.

Then there are warehouse-focused robotics companies like Fetch Robotics, which traces its lineage back to Willow Garage, and Locus Robotics, which is linked to the logistics services company Quiet Logistics.

“Funding in robotics has been incredible over the past several years, and for good reason,” said John Santagate, Research Director for Commercial Service Robotics at Research and Analysis Firm IDC, in a statement. “The growth in funding is a function of a market that has become accepting of the technology, a technology area that has matured to meet market demands, and vision of the future that must include flexible automation technology. Products must move faster and more efficiently through the warehouse today to keep up with consumer demand and autonomous mobile robots offer a cost-effective way to deploy automation to enable speed, efficiency, and flexibility.”

The team at inVia realized it wasn’t enough to sell the robots. To give warehouses a full sense of the potential cost savings they could have with inVia’s robots, they’d need to take a page from the software playbook. Rather than selling the equipment, they’d sell the work the robots were doing as a service.

“Customers will ask us how much the robots cost and that’s sort of irrelevant,” says Voorhies. “We don’t want customers to think about those things at all.”

Contracts between inVia and logistics companies are based on the unit of work done, Voorhies said. “We charge on the order line,” says Voorhies. “An order line is a single [stock keeping unit] that somebody would order regardless of quantity… We’re essentially charging them every time a robot has to bring a tote and present it in front of a person. The faster we’re able to do that and the less robots we can use to present an item the better our margins are.”
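As a toy illustration of that billing model — one billable unit per SKU line on an order, regardless of quantity — here is a short sketch. This is my own illustration, not inVia’s code, and the per-line rate is invented:

```python
# Illustrative sketch of per-order-line billing (not inVia's actual code).
# An "order line" is one SKU on an order, regardless of quantity, so an
# order with 4 units of SKU A and 1 unit of SKU B counts as 2 lines.

def billable_lines(orders):
    """Count billable order lines: one per SKU line per order."""
    return sum(len(order["lines"]) for order in orders)

def invoice(orders, rate_per_line=0.25):  # rate is a made-up example
    return billable_lines(orders) * rate_per_line

orders = [
    {"id": 1, "lines": [{"sku": "A", "qty": 4}, {"sku": "B", "qty": 1}]},
    {"id": 2, "lines": [{"sku": "C", "qty": 10}]},
]
print(billable_lines(orders))  # 3 — quantities don't matter
print(invoice(orders))         # 0.75
```

Under this scheme, inVia’s margin improves the fewer robots and the less time it takes to present each tote, exactly as Voorhies describes.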

It may not sound like a huge change, but those kinds of efficiencies matter in warehouses, Voorhies said. “If you’re a person pushing a cart in a warehouse that cart can have 35 pallets on it. With us, that person is standing still, and they’re really not limited to a single cart. They are able to fill 70 orders at the same time rather than 55,” he said.

At Rakuten Super Logistics, the deployment of inVia’s robots is already yielding returns, according to Michael Manzione, the company’s chief executive officer.

“Really [robotics] being used in a fulfillment center is pretty new,” said Manzione in an interview. “We started looking at the product in late February and went live in late March.”

For Manzione, the big selling point was scaling the robots quickly, with no upfront cost. “The bottom line is going to be effective when we see planning around the holiday season,” said Manzione. “We’re not planning on bringing in additional people, versus last year when we doubled our labor.”

As Voorhies notes, training a team to work effectively in a warehouse environment isn’t easy.

“The big problem is that it’s really hard to hire extra people to do this. In a warehouse there’s a dedicated core team that really kicks ass and they’re really happy with those pickers and they will be happy with what they get from whatever those people can sweat out in a shift,” Voorhies said. “Once you need to push your throughput beyond what your core team can do it’s hard to find people who can do that job well.”

SessionM customer loyalty data aggregator snags $23.8M investment

SessionM announced a $23.8 million Series E investment led by Salesforce Ventures. A bushel of existing investors including Causeway Media Partners, CRV, General Atlantic, Highland Capital and Kleiner Perkins Caufield & Byers also contributed to the round. The company has now raised over $97 million.

At its core, SessionM aggregates loyalty data for brands to help them understand their customer better, says company co-founder and CEO Lars Albright. “We are a customer data and engagement platform that helps companies build more loyal and profitable relationships with their consumers,” he explained.

Essentially that means they are pulling data from a variety of sources and helping brands offer customers more targeted incentives, offers and product recommendations. “We give [our users] a holistic view of that customer and what motivates them,” he said.

Screenshot: SessionM (cropped)

To achieve this, SessionM takes advantage of machine learning to analyze the data stream and integrates with partner platforms like Salesforce, Adobe and others. This certainly fits in with Adobe’s goal to build a customer service experience system of record and Salesforce’s acquisition of Mulesoft in March to integrate data from across an organization, all in the interest of better understanding the customer.

When it comes to using data like this, especially with the advent of GDPR in the EU in May, Albright recognizes that companies need to be more careful with data, and that it has really enhanced the sensitivity around stewardship for all data-driven businesses like his.

“We’ve been at the forefront of adopting the right product requirements and features that allow our clients and businesses to give their consumers the necessary control to be sure we’re complying with all the GDPR regulations,” he explained.

The company would not discuss valuation or revenue. Its most recent round prior to today’s announcement was a Series D in 2016 for $35 million, also led by Salesforce Ventures.

SessionM, which was founded in 2011, has around 200 employees with headquarters in downtown Boston. Customers include Coca-Cola, L’Oreal and Barney’s.

Cogito scores $37M as AI-driven sentiment analysis biz grows

Cogito announced a $37 million Series C investment today led by Goldman Sachs Growth Equity. Previous investors Salesforce Ventures and OpenView also chipped in. Mark Midle of Goldman Sachs’ Merchant Banking Division has joined Cogito’s Board of Directors.

The company has raised over $64 million since it emerged from the MIT Human Dynamics Lab back in 2007 trying to use the artificial intelligence technology available at the time to understand sentiment and apply it in a business context.

While it took some time for the technology to catch up with the vision, and find the right use case, company CEO and founder Joshua Feast says today they are helping customer service representatives understand the sentiment and emotional context of the person on the line and give them behavioral cues on how to proceed.

“We sell software to very large, premium brands with many thousands of people in contact centers. The purpose of our solution is to help provide a really wonderful service experience in moments of truth,” he explained. Anyone who deals with a large company’s customer service has likely felt there is sometimes a disconnect between the person on the phone and their ability to understand your predicament and solve your problem.

Cogito in action giving customer service reps real-time feedback.

He says that with his company’s solution, which analyzes the content of the call in real time and provides relevant feedback, the goal is not just to complete the service call, but to leave the customer feeling good about the brand and the experience. Certainly a bad experience can have the opposite effect.

He wants to use technology to make the experience a more human interaction and he recognizes that as an organization grows, layers of business process make it harder for the customer service representative to convey that humanity. Feast believes that technology has helped create this problem and it can help solve it too.

While the company is not talking about valuation or specific revenue at this point, Feast reports that revenue has grown 3X over the last year. Among their customers are Humana and Metlife, two large insurance companies, each with thousands of customer service agents.

Cogito is based in downtown Boston with 117 employees at last count, and of course they hope to use the money to add on to that number and help scale this vision further.

“This is about scaling our organization to meet clients’ needs. It’s also about deepening what we do. In a lot of ways, we are only scratching the surface [of the underlying technology] in terms of how we can use AI to support emotional connections and help organizations be more human,” Feast said.

AnyVision AI startup locks in $28M for its body and facial recognition tech

As image recognition advances continue to accelerate, startups with a mind toward security applications are seeing major interest in making surveillance systems more intelligent.

AnyVision is working on face, body and object recognition tech and the underlying system infrastructure to help companies deploy smart cameras for various purposes. The tech works when deployed on most types of camera and does not require highly sophisticated sensors to operate, the company says.

“It’s not just how accurate the system is, it’s also how much it scales,” AnyVision CEO Eylon Etshtein tells TechCrunch. “You can put more than 20 concurrent full HD camera streams on a single GPU.”
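That scaling claim maps directly onto hardware sizing. A back-of-the-envelope sketch — the 20-streams-per-GPU figure comes from Etshtein’s quote, while the helper function is purely illustrative:

```python
import math

# Rough capacity planning from the claimed 20 full-HD streams per GPU.
# Purely illustrative; real sizing depends on resolution, frame rate,
# and the models running on each stream.

def gpus_needed(num_cameras, streams_per_gpu=20):
    return math.ceil(num_cameras / streams_per_gpu)

print(gpus_needed(130))  # a hypothetical 130-camera site needs 7 GPUs
```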

The Tel Aviv-based AI startup announced today that it has closed a $28 million Series A funding round led by Bosch. The quickly growing company already has 130 employees and has plans to open up three new offices by the year’s end.

Right now, AnyVision is working on products in a few different verticals. Its security product called “Better Tomorrow” has been a key focus for the company.

Even as tech giants in the U.S. like Amazon and Google are scrutinized for contracts with government orgs that involve facial recognition tech, Etshtein believes his company’s solution will be an improvement over existing video surveillance technologies in terms of protecting the public’s privacy.

“Today, the video management systems basically record everything and you can see individuals’ faces, you can see everything,” Etshtein says. “Once our system is installed it pixelates all the faces in the stream automatically, even the operator in the control center cannot see your face because the mathematical models just represent the persons of interest.”

The company also recently released a product called FaceKey that leverages the company’s facial recognition tech for verification purposes, allowing customers with phones other than the iPhone X to use their face as a two-factor authentication method in things like banking apps. Now, there have certainly been a lot of issues with maintaining the needed accuracy, which is exactly what has made Face ID so novel, but Etshtein claims to have “cracked the problem.”

Other products AnyVision is working on include some new efforts in the sports and entertainment spaces as well as a retail analytics platform that they’re hoping to release later this summer.

Machine learning boosts Swiss startup’s shot at human-powered land speed record

The current world speed record for riding a bike down a straight, flat road was set in 2012 by a Dutch team, but the Swiss have a plan to topple their rivals — with a little help from machine learning. An algorithm trained on aerodynamics could streamline their bike, perhaps cutting air resistance by enough to set a new record.

Currently the record is held by Sebastiaan Bowier, who set it in 2012 at 133.78 km/h, or just over 83 mph. It’s hard to imagine how his bike, which looked more like a tiny landbound rocket than any kind of bicycle, could be significantly improved on.

But every little bit counts when records are measured down to a hundredth of a unit, and anyway, who knows whether some strange new shape might totally change the game?

To pursue this, researchers at the École Polytechnique Fédérale de Lausanne’s Computer Vision Laboratory developed a machine learning algorithm that, trained on 3D shapes and their aerodynamic qualities, “learns to develop an intuition about the laws of physics,” as the university’s Pierre Baqué said.

“The standard machine learning algorithms we work with in our lab take images as input,” he explained in an EPFL video. “An image is a very well-structured signal that is very easy to handle by a machine-learning algorithm. However, engineers working in this domain use what we call a mesh. A mesh is a very large graph with a lot of nodes that is not very convenient to handle.”

Nevertheless, the team managed to design a convolutional neural network that can sort through countless shapes and automatically determine which should (in theory) provide the very best aerodynamic profile.

“Our program results in designs that are sometimes 5-20 percent more aerodynamic than conventional methods,” Baqué said. “But even more importantly, it can be used in certain situations that conventional methods can’t. The shapes used in training the program can be very different from the standard shapes for a given object. That gives it a great deal of flexibility.”

That means that the algorithm isn’t just limited to slight variations on established designs, but it also is flexible enough to take on other fluid dynamics problems like wing shapes, windmill blades or cars.
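The workflow described here — learn a fast surrogate for the physics, then search over candidate shapes for the best predicted profile — can be caricatured in a few lines. Everything below is a toy stand-in: the “drag” function is an invented analytic proxy and a shape is just two numbers, whereas EPFL’s system uses a convolutional network over full 3D meshes.

```python
import random

# Toy stand-in for surrogate-driven shape optimization. A shape is
# (length, height); "drag" is an invented proxy, not real aerodynamics.

def toy_drag(length, height):
    # Invented proxy: drag falls with slenderness, rises with frontal area.
    return height * height / length + 0.01 * length

def random_search(n_iters=2000, seed=0):
    """Sample random shapes and keep the one with the lowest predicted drag."""
    rng = random.Random(seed)
    best_shape, best_drag = None, float("inf")
    for _ in range(n_iters):
        shape = (rng.uniform(1.0, 5.0), rng.uniform(0.2, 1.0))
        d = toy_drag(*shape)
        if d < best_drag:
            best_shape, best_drag = shape, d
    return best_shape, best_drag

shape, drag = random_search()
print(f"best (length, height): {shape}, drag score: {drag:.3f}")
```

The point of a learned surrogate is that a call like `toy_drag` replaces a full CFD simulation, making a search over thousands of candidate shapes cheap.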

The tech has been spun out into a separate company, Neural Concept, of which Baqué is the CEO. It was presented today at the International Conference on Machine Learning in Stockholm.

A team from the Annecy University Institute of Technology will attempt to apply the computer-honed model in person at the World Human Powered Speed Challenge in Nevada this September — after all, no matter how much computer assistance there is, as the name says, it’s still powered by a human.

Apple’s Shortcuts will flip the switch on Siri’s potential

Matthew Cassinelli
Contributor

Matthew Cassinelli is a former member of the Workflow team and works as an independent writer and consultant. He previously worked as a data analyst for VaynerMedia.

At WWDC, Apple pitched Shortcuts as a way to “take advantage of the power of apps” and “expose quick actions to Siri.” These will be suggested by the OS, can be given unique voice commands, and will even be customizable with a dedicated Shortcuts app.

But since this new feature won’t let Siri interpret everything, many have been lamenting that Siri didn’t get much better — and is still lacking compared to Google Assistant or Amazon Echo.

But to ignore Shortcuts would be missing out on the bigger picture. Apple’s strengths have always been the device ecosystem and the apps that run on them.

With Shortcuts, both play a major role in how Siri will prove to be a truly useful assistant and not just a digital voice to talk to.

Your Apple devices just got better

For many, voice assistants are a nice-to-have, but not a need-to-have.

It’s undeniably convenient to get facts by speaking to the air, turning on the lights without lifting a finger, or triggering a timer or text message – but so far, studies have shown people don’t use much more than these on a regular basis.

People don’t often do more than that because the assistants aren’t really ready for complex tasks yet, and when your assistant is limited to tasks inside your home or commands spoken into your phone, the drawbacks prevent you from going deep.

If you prefer Alexa, you get more devices, better reliability, and a breadth of skills, but there’s not a great phone or tablet experience you can use alongside your Echo. If you prefer to have Google’s Assistant everywhere, you must be all in on the Android and Home ecosystem to get the full experience too.

Plus, with either option, there are privacy concerns baked into how both work on a fundamental level – over the web.

In Apple’s ecosystem, you have Siri on iPhone, iPad, Apple Watch, AirPods, HomePod, CarPlay, and any Mac. Add in Shortcuts on each of those devices (except Mac, but they still have Automator) and suddenly you have a plethora of places to execute all your commands entirely by voice.

Each accessory that Apple users own will get upgraded, giving Siri new ways to fulfill the 10 billion and counting requests people make each month (according to Craig Federighi’s statement on-stage at WWDC).

But even more important than all the places where you can use your assistant is how – with Shortcuts, Siri gets even better with each new app that people download. There’s the other key difference: the App Store.

Actions are the most important part of your apps

iOS has always had a vibrant community of developers who create powerful, top-notch applications that push the system to its limits and take advantage of the ever-increasing power these mobile devices have.

Shortcuts opens up those capabilities to Siri – every action you take in an app can be shared out with Siri, letting people interact right there inline or using only their voice, with the app running everything smoothly in the background.

Plus, the functional approach that Apple is taking with Siri creates new opportunities for developers to provide utility to people instead of requiring their attention. The suggestions feature of Shortcuts rewards “acceleration,” surfacing more often the apps that save users the most time.

This opens the door to more specialized types of apps that don’t necessarily have to grow a huge audience and serve them ads – if you can make something that helps people, Shortcuts can help them use your app more than ever before (and without as much effort). Developers can make a great experience for when people visit the app, but also focus on actually doing something useful too.

This isn’t a virtual assistant that lives in the cloud, but a digital helper that can pair up with the apps uniquely taking advantage of Apple’s hardware and software capabilities to truly improve your use of the device.

In the most groan-inducing way possible, “there’s an app for that” is back and more important than ever. Not only are apps the centerpiece of the Siri experience, but it’s their capabilities that extend Siri’s – the better the apps you have, the better Siri can be.

Control is at your fingertips

Importantly, Siri gets all of this Shortcuts power while keeping the control in each person’s hands.

All of the information provided to the system is securely passed along by individual apps – if something doesn’t look right, you can just delete the corresponding app and the information is gone.

Siri will make recommendations based on activities deemed relevant by the apps themselves as well, so over-active suggestions shouldn’t be common (unless you’re way too active in some apps, in which case they added Screen Time for you too).

Each of the voice commands is custom per user as well, so people can ignore their apps’ suggestions and set up phrases to their own liking. This means nothing is already “taken” because somebody signed up for the skill first (unless you’ve already used it yourself, of course).

Also, Shortcuts don’t require the web to work – the voice triggers might not work, but the suggestions and Shortcuts app give you a place to use your assistant voicelessly. And importantly, Shortcuts can use the full power of the web when they need to.

This user-centric approach paired with the technical aspects of how Shortcuts works gives Apple’s assistant a leg up for any consumers who find privacy important. Essentially, Apple devices are only listening for “Hey Siri”, then the available Siri domains + your own custom trigger phrases.

Without exposing your information to the world or teaching a robot to understand everything, Apple gave Siri a slew of capabilities that in many ways can’t be matched. With Shortcuts, it’s the apps, the operating system, and the variety of hardware that will make Siri uniquely qualified come this fall.

Plus, the Shortcuts app will provide a deeper experience for those who want to chain together actions and customize their own shortcuts.

There’s lots more under the hood to experiment with, but this will allow anyone to tweak & prod their Siri commands until they have a small army of custom assistant tasks at the ready.

Hey Siri, let’s get started

Siri doesn’t know all, can’t perform every task you bestow upon it, and won’t make somewhat uncanny phone calls on your behalf.

But instead of spending time conversing with a somewhat faked “artificial intelligence”, Shortcuts will help people use Siri as an actual digital assistant – a computer to help them get things done better than they might’ve otherwise.

With Siri’s new skills extending to each of your Apple products (except for Apple TV and the Mac, but maybe one day?), every new device you get and every new app you download can reveal another way to take advantage of what this technology can offer.

This broadening of Siri may take some time to get used to – it will be about finding the right place for it in your life.

As you go about your apps, you’ll start seeing and using suggestions. You’ll set up a few voice commands, then you’ll do something like kick off a truly useful shortcut from your Apple Watch without your phone connected and you’ll realize the potential.

This is a real digital assistant, your apps know how to work with it, and it’s already on many of your Apple devices. Now, it’s time to actually make use of it.

In Army of None, a field guide to the coming world of autonomous warfare

The Silicon Valley-military industrial complex is increasingly in the crosshairs of artificial intelligence engineers. A few weeks ago, Google was reported to be backing out of a Pentagon contract around Project Maven, which would use image recognition to automatically evaluate photos. Earlier this year, AI researchers around the world joined petitions calling for a boycott of any research that could be used in autonomous warfare.

For Paul Scharre, though, such petitions barely touch the deep complexity, nuance, and ambiguity that will make evaluating autonomous weapons a major concern for defense planners this century. In Army of None, Scharre argues that the challenges around just the definitions of these machines will take enormous effort to work out between nations, let alone handling their effects. It’s a sobering, thoughtful, if at times protracted look at this critical topic.

Scharre should know. A former Army Ranger, he joined the Pentagon working in the Office of Secretary of Defense, where he developed some of the Defense Department’s first policies around autonomy. Leaving in 2013, he joined the DC-based think tank Center for a New American Security, where he directs a center on technology and national security. In short, he has spent about a decade on this emerging tech, and his expertise clearly shows throughout the book.

The first challenge facing these petitions on autonomous weapons is that such systems already exist, and are already deployed in the field. Technologies like the Aegis Combat System, High-speed Anti-Radiation Missile (HARM), and the Harpy already include sophisticated autonomous features. As Scharre writes, “The human launching the Harpy decides to destroy any enemy radars within a general area in space and time, but the Harpy itself chooses the specific radar it destroys.” The weapon can loiter for 2.5 hours while it determines a target with its sensors — is it autonomous?

Scharre repeatedly uses the military’s OODA loop (for observe, orient, decide, and act) as a framework to determine the level of autonomy for a given machine. Humans can be “in the loop,” where they determine the actions of the machine, “on the loop” where they have control but the machine is mostly working independently, and “out of the loop” when machines are entirely independent of human decision-making.
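Scharre’s three categories can be sketched as a tiny decision rule. The labels follow the book; the two yes/no inputs are my simplification of what is really a continuum.

```python
from enum import Enum

# Sketch of Scharre's three human-autonomy relationships around the
# OODA (observe, orient, decide, act) loop. Illustrative only.

class HumanRole(Enum):
    IN_THE_LOOP = "human decides each action"
    ON_THE_LOOP = "human supervises and can intervene"
    OUT_OF_THE_LOOP = "machine acts independently of human decisions"

def classify(human_approves_each_action, human_can_intervene):
    if human_approves_each_action:
        return HumanRole.IN_THE_LOOP
    if human_can_intervene:
        return HumanRole.ON_THE_LOOP
    return HumanRole.OUT_OF_THE_LOOP

# A loitering munition: the operator launches it but does not approve
# each target, and cannot intervene once it commits to one.
print(classify(False, False).name)  # OUT_OF_THE_LOOP
```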

The framework helps clear some of the confusion between different systems, but it is not sufficient. When machines fight machines, for instance, the speed of battle can become so great that humans may well do more harm than good by intervening. Millions of cycles of the OODA loop could be processed by a drone before a human even registers what is happening on the battlefield. A human out of the loop, therefore, could well lead to safer outcomes. It’s exactly these kinds of paradoxes that make the subject so difficult to analyze.

In addition to paradoxes, constraints are a huge theme in the book as well. Speed is one — and the price of military equipment is another. Dumb missiles are cheap, and adding automation has consistently added to the price of hardware. As Scharre notes, “Modern missiles can cost upwards of a million dollars apiece. As a practical matter, militaries will want to know that there is, in fact, a valid enemy target in the area before using an expensive weapon.”

Another constraint is simply culture. The author writes, “There is intense cultural resistance within the U.S. military to handing over jobs to uninhabited systems.” Not unlike automation in the civilian workforce, people in power want to place flesh-and-blood humans in the most complex assignments. These constraints matter, because Scharre foresees a classic arms race around these weapons as dozens of countries pursue these machines.

Humans “in the loop” may be the default today, but for how long?

At a higher level, about a third of the book is devoted to the history of automation, (generalized) AI, and the potential for autonomy, topics which should be familiar to any regular reader of TechCrunch. Another third of the book or so is a meditation on the challenges of the technology from a dual use and strategic perspective, as well as the dubious path toward an international ban.

Yet, what I found most valuable in the book was the chapter on ethics, lodged fairly late in the book’s narrative. Scharre does a superb job covering the ground of the various schools of thought around the ethics of autonomous warfare, and how they intersect and compete. He extensively analyzes and quotes Ron Arkin, a roboticist who has spent significant time thinking about autonomy in warfare. Arkin tells Scharre that “We put way too much faith in human warfighters,” and argues that autonomous weapons could theoretically be programmed never to commit a war crime unlike humans. Other activists, like Jody Williams, believe that only a comprehensive ban can ensure that such weapons are never developed in the first place.

Scharre regrets that more of these conversations don’t take into account the strategic positions of the military. He notes that international discussions on bans are led by NGOs and not by nation states, whereas all examples of successful bans have been the other way around.

Another challenge is simply that antiwar activism and anti-autonomous weapons activism are increasingly being conflated. Scharre writes, “One of the challenges in weighing the ethics of autonomous weapons is untangling which criticisms are about autonomous weapons and which are really about war.” Citing William Tecumseh Sherman’s pillaging march through the South during the Civil War, the author reminds the reader that “war is hell,” and that militaries don’t choose weapons in a vacuum, but relative to the other tools in their own and their competitors’ arsenals.

The book is a compendium of the various issues around autonomous weapons, although it suffers a bit from the classic problem of being too lengthy on some subjects (drone swarms) while offering limited information on others (arms control negotiations). The book is also marred at times by typos, such as “news rules of engagement,” that detract from an otherwise direct and active text. Tighter editing would have helped in both cases. Given the inchoate nature of the subject, the book works as an overview, although it fails to present an opinionated narrative on where autonomy and the military should go in the future, an unsatisfying gap given the author’s extensive and unique background on the subject.

All that said, Army of None is a one-stop guide book to the debates, the challenges, and yes, the opportunities that can come from autonomous warfare. Scharre ends on exactly the right note, reminding us that ultimately, all of these machines are owned by us, and what we choose to build is within our control. “The world we are creating is one that will have intelligent machines in it, but it is not for them. It is a world for us.” We should continue to engage, and petition, and debate, but always with a vision for the future we want to realize.

Facebook’s new AI research is a real eye-opener

There are plenty of ways to manipulate photos to make you look better, remove red eye or lens flare, and so on. But so far the blink has proven a tenacious opponent of good snapshots. That may change with research from Facebook that replaces closed eyes with open ones in a remarkably convincing manner.

It’s far from the only example of intelligent “in-painting,” as the technique is called when a program fills in a space with what it thinks belongs there. Adobe in particular has made good use of it with its “Content-Aware Fill,” allowing users to seamlessly replace undesired features, for example a protruding branch or a cloud, with a pretty good guess at what would be there if it weren’t.

But some features are beyond these tools’ capacity to replace, one of which is eyes. Their detailed and highly variable nature makes it particularly difficult for a system to change or create them realistically.

Facebook, which probably has more pictures of people blinking than any other entity in history, decided to take a crack at this problem.

It does so with a Generative Adversarial Network, essentially a machine learning system that tries to fool itself into thinking its creations are real. In a GAN, one part of the system learns to recognize, say, faces, and another part of the system repeatedly creates images that, based on feedback from the recognition part, gradually grow in realism.
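The adversarial feedback loop can be sketched in miniature. The toy below is my illustration, not Facebook’s model: it trains a two-parameter generator against a logistic discriminator on 1-D data, purely to show how the creator part improves from the recognizer part’s feedback.

```python
import math
import random

random.seed(0)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Real data: samples from N(4, 0.5). Generator: G(z) = a*z + b, z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), trained toward 1 for real, 0 for fake.
a, b = 1.0, 0.0
w, c = 0.0, 0.0
LR, BATCH, STEPS = 0.05, 16, 3000
REAL_MEAN, REAL_STD = 4.0, 0.5

for _ in range(STEPS):
    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    gw = gc = 0.0
    for _ in range(BATCH):
        xr = random.gauss(REAL_MEAN, REAL_STD)
        dr = sigmoid(w * xr + c)
        gw += (1.0 - dr) * xr
        gc += (1.0 - dr)
        xf = a * random.gauss(0.0, 1.0) + b
        df = sigmoid(w * xf + c)
        gw -= df * xf
        gc -= df
    w += LR * gw / BATCH
    c += LR * gc / BATCH

    # Generator step: ascend log D(fake), i.e. learn to fool the discriminator.
    ga = gb = 0.0
    for _ in range(BATCH):
        z = random.gauss(0.0, 1.0)
        xf = a * z + b
        df = sigmoid(w * xf + c)
        grad_x = (1.0 - df) * w   # d/dx log D(x)
        ga += grad_x * z
        gb += grad_x
    a += LR * ga / BATCH
    b += LR * gb / BATCH

fake_mean = b  # E[G(z)] = b, since E[z] = 0
```

After training, the generator’s output mean has drifted from 0 toward the real data’s mean of 4: the creator improves only because the recognizer keeps telling it what looks fake.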

From left to right: “Exemplar” images, source images, Photoshop’s eye-opening algorithm, and Facebook’s method.

In this case the network is trained to both recognize and replicate convincing open eyes. This could be done already, but as you can see in the examples at right, existing methods left something to be desired. They seem to paste in the eyes of the people without much consideration for consistency with the rest of the image.

Machines are naive that way: they have no intuitive understanding that opening one’s eyes does not also change the color of the skin around them. (For that matter, they have no intuitive understanding of eyes, color, or anything at all.)

What Facebook’s researchers did was to include “exemplar” data showing the target person with their eyes open, from which the GAN learns not just what eyes should go on the person, but how the eyes of this particular person are shaped, colored, and so on.

The results are quite realistic: there’s no color mismatch or obvious stitching because the recognition part of the network knows that that’s not how the person looks.

In testing, people mistook the fake eyes-opened photos for real ones, or said they couldn’t be sure which was which, more than half the time. And unless I knew a photo was definitely tampered with, I probably wouldn’t notice if I was scrolling past it in my newsfeed. Gandhi looks a little weird, though.

It still fails in some situations, creating weird artifacts if a person’s eye is partially covered by a lock of hair, or sometimes failing to recreate the color correctly. But those are fixable problems.

You can imagine the usefulness of an automatic eye-opening utility on Facebook that checks a person’s other photos and uses them as reference to replace a blink in the latest one. It would be a little creepy, but that’s pretty standard for Facebook, and at least it might save a group photo or two.

UK report warns DeepMind Health could gain ‘excessive monopoly power’

DeepMind’s foray into digital health services continues to raise concerns. The latest worries are voiced by a panel of external reviewers appointed by the Google-owned AI company to report on its operations after its initial data-sharing arrangements with the U.K.’s National Health Service (NHS) ran into a major public controversy in 2016.

The DeepMind Health Independent Reviewers’ 2018 report flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to “exert excessive monopoly power” as a result of the data access and streaming infrastructure that’s bundled with provision of the Streams app — and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in the future, want to offer their own digital assistance solutions to the Trust.

While the underlying FHIR (aka fast healthcare interoperability resource) standard deployed by DeepMind for Streams uses an open API, the contract between the company and the Royal Free Trust funnels connections via DeepMind’s own servers, and prohibits connections to other FHIR servers. It’s a commercial structure that seemingly works against the openness and interoperability DeepMind’s co-founder Mustafa Suleyman has claimed to support.
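For context, FHIR resources are plain JSON documents exchanged over a REST API. Below is a minimal, entirely hypothetical example of the kind of resource a Streams-like app might consume, a serum creatinine observation of the sort used in Acute Kidney Injury alerting; the field names follow the FHIR specification, but the identifiers and values are invented:

```python
import json

# Hypothetical FHIR "Observation" resource: a serum creatinine result.
# Structure follows the FHIR standard; all identifiers and values are made up.
observation_json = """
{
  "resourceType": "Observation",
  "status": "final",
  "code": {
    "coding": [{"system": "http://loinc.org", "code": "2160-0",
                "display": "Creatinine [Mass/volume] in Serum or Plasma"}]
  },
  "subject": {"reference": "Patient/example-123"},
  "valueQuantity": {"value": 142, "unit": "umol/L"}
}
"""

obs = json.loads(observation_json)
patient_ref = obs["subject"]["reference"]   # which patient this concerns
value = obs["valueQuantity"]["value"]       # the lab result itself
unit = obs["valueQuantity"]["unit"]         # micromoles per litre
```

Because the format is an open standard, any server exposing a FHIR API could in principle serve such resources to any client; the contractual restriction described above, not the technology, is what closes the system off.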

“There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position,” the reviewers write.

They do point to DeepMind’s “stated commitment to interoperability of systems” and “their adoption of the FHIR open API” as positive indications, writing: “This means that there is potential for many other SMEs to become involved, creating a diverse and innovative marketplace which works to the benefit of consumers, innovation and the economy.”

“We also note DeepMind Health’s intention to implement many of the features of Streams as modules which could be easily swapped, meaning that they will have to rely on being the best to stay in business,” they add. 

However, stated intentions and future potentials are clearly not the same as on-the-ground reality. And, as it stands, a technically interoperable app-delivery infrastructure is being encumbered by prohibitive clauses in a commercial contract — and by a lack of regulatory pushback against such behavior.

The reviewers also raise concerns about an ongoing lack of clarity around DeepMind Health’s business model — writing: “Given the current environment, and with no clarity about DeepMind Health’s business model, people are likely to suspect that there must be an undisclosed profit motive or a hidden agenda. We do not believe this to be the case, but would urge DeepMind Health to be transparent about their business model, and their ability to stick to that without being overridden by Alphabet. For once an idea of hidden agendas is fixed in people’s mind, it is hard to shift, no matter how much a company is motivated by the public good.”

“We have had detailed conversations about DeepMind Health’s evolving thoughts in this area, and are aware that some of these questions have not yet been finalised. However, we would urge DeepMind Health to set out publicly what they are proposing,” they add. 

DeepMind has suggested it wants to build healthcare AIs that are capable of charging by results. But Streams does not involve any AI. The service is also being provided to NHS Trusts for free, at least for the first five years — raising the question of how exactly the Google-owned company intends to recoup its investment.

Google of course monetizes a large suite of free-at-the-point-of-use consumer products — such as the Android mobile operating system; its cloud email service Gmail; and the YouTube video sharing platform, to name three — by harvesting people’s personal data and using that information to inform its ad targeting platforms.

Hence the reviewers’ recommendation for DeepMind to set out its thinking on its business model to avoid its intentions vis-a-vis people’s medical data being viewed with suspicion.

The company’s historical modus operandi also underlines the potential monopoly risks if DeepMind is allowed to carve out a dominant platform position in digital healthcare provision — given how effectively its parent has been able to turn a free-for-OEMs mobile OS (Android) into global smartphone market OS dominance, for example.

So, while DeepMind only has a handful of contracts with NHS Trusts for the Streams app and delivery infrastructure at this stage, the reviewers’ concerns over the risk of the company gaining “excessive monopoly power” do not seem overblown.

They are also worried about DeepMind’s ongoing vagueness about how exactly it works with its parent Alphabet, and what data could ever be transferred to the ad giant — an inevitably queasy combination when stacked against DeepMind’s handling of people’s medical records.

“To what extent can DeepMind Health insulate itself against Alphabet instructing them in the future to do something which it has promised not to do today? Or, if DeepMind Health’s current management were to leave DeepMind Health, how much could a new CEO alter what has been agreed today?” they write.

“We appreciate that DeepMind Health would continue to be bound by the legal and regulatory framework, but much of our attention is on the steps that DeepMind Health have taken to take a more ethical stance than the law requires; could this all be ended? We encourage DeepMind Health to look at ways of entrenching its separation from Alphabet and DeepMind more robustly, so that it can have enduring force to the commitments it makes.”

Responding to the report’s publication on its website, DeepMind writes that it’s “developing our longer-term business model and roadmap.”

“Rather than charging for the early stages of our work, our first priority has been to prove that our technologies can help improve patient care and reduce costs. We believe that our business model should flow from the positive impact we create, and will continue to explore outcomes-based elements so that costs are at least in part related to the benefits we deliver,” it continues.

So it has nothing to say to defuse the reviewers’ concerns about making its intentions for monetizing health data plain — beyond deploying a few choice PR soundbites.

On its links with Alphabet, DeepMind also has little to say, writing only that: “We will explore further ways to ensure there is clarity about the binding legal frameworks that govern all our NHS partnerships.”

“Trusts remain in full control of the data at all times,” it adds. “We are legally and contractually bound to only using patient data under the instructions of our partners. We will continue to make our legal agreements with Trusts publicly available to allow scrutiny of this important point.”

“There is nothing in our legal agreements with our partners that prevents them from working with any other data processor, should they wish to seek the services of another provider,” it also claims in response to additional questions we put to it.

“We hope that Streams can help unlock the next wave of innovation in the NHS. The infrastructure that powers Streams is built on state-of-the-art open and interoperable standards, known as FHIR. The FHIR standard is supported in the UK by NHS Digital, NHS England and the INTEROPen group. This should allow our partner trusts to work more easily with other developers, helping them bring many more new innovations to the clinical frontlines,” it adds in additional comments to us.

“Under our contractual agreements with relevant partner trusts, we have committed to building FHIR API infrastructure within the five year terms of the agreements.”

Asked about the progress it’s made on a technical audit infrastructure for verifying access to health data, which it announced last year, it reiterated the wording on its blog, saying: “We will remain vigilant about setting the highest possible standards of information governance. At the beginning of this year, we appointed a full time Information Governance Manager to oversee our use of data in all areas of our work. We are also continuing to build our Verifiable Data Audit and other tools to clearly show how we’re using data.”

So developments on that front look as slow as we expected.

The Google-owned U.K. AI company began its push into digital healthcare services in 2015, quietly signing an information-sharing arrangement with a London-based NHS Trust that gave it access to around 1.6 million people’s medical records for developing an alerts app for a condition called Acute Kidney Injury.

It also inked an MoU with the Trust in which the pair set out their ambition to apply AI to NHS data sets. (They even went so far as to get ethical sign-off for an AI project — but have consistently claimed the Royal Free data was not fed to any AIs.)

However, the data-sharing collaboration ran into trouble in May 2016 when the scope of patient data being shared by the Royal Free with DeepMind was revealed (via investigative journalism, rather than by disclosures from the Trust or DeepMind).

None of the ~1.6 million people whose non-anonymized medical records had been passed to the Google-owned company had been informed or asked for their consent. And questions were raised about the legal basis for the data-sharing arrangement.

Last summer the U.K.’s privacy regulator concluded an investigation of the project — finding that the Royal Free NHS Trust had broken data protection rules during the app’s development.

Yet despite ethical questions and regulatory disquiet about the legality of the data sharing, the Streams project steamrollered on. And the Royal Free Trust went on to implement the app for use by clinicians in its hospitals, while DeepMind has also signed several additional contracts to deploy Streams to other NHS Trusts.

More recently, the law firm Linklaters completed an audit of the Royal Free Streams project, after being commissioned by the Trust as part of its settlement with the ICO. Though this audit only examined the current functioning of Streams. (There has been no historical audit of the lawfulness of people’s medical records being shared during the build and test phase of the project.)

Linklaters did recommend that the Royal Free terminate its wider MoU with DeepMind — and the Trust has confirmed to us that it will be following the firm’s advice.

“The audit recommends we terminate the historic memorandum of understanding with DeepMind which was signed in January 2016. The MOU is no longer relevant to the partnership and we are in the process of terminating it,” a Royal Free spokesperson told us.

So DeepMind, probably the world’s most famous AI company, is in the curious position of being involved in providing digital healthcare services to U.K. hospitals that don’t actually involve any AI at all. (Though it does have some ongoing AI research projects with NHS Trusts too.)

In mid 2016, at the height of the Royal Free DeepMind data scandal — and in a bid to foster greater public trust — the company appointed the panel of external reviewers who have now produced their second report looking at how the division is operating.

And it’s fair to say that much has happened in the tech industry since the panel was appointed to further undermine public trust in tech platforms and algorithmic promises — including the ICO’s finding that the initial data-sharing arrangement between the Royal Free and DeepMind broke U.K. privacy laws.

The eight members of the panel for the 2018 report are: Martin Bromiley OBE; Elisabeth Buggins CBE; Eileen Burbidge MBE; Richard Horton; Dr. Julian Huppert; Professor Donal O’Donoghue; Matthew Taylor; and Professor Sir John Tooke.

In their latest report the external reviewers warn that the public’s view of tech giants has “shifted substantially” versus where it was even a year ago — asserting that “issues of privacy in a digital age are, if anything, of greater concern.”

At the same time, politicians are also looking rather more critically at the workings and social impacts of tech giants.

The U.K. government, meanwhile, has been keen to position itself as a supporter of AI, providing public funds for the sector and, in its Industrial Strategy white paper, identifying AI and data as one of four so-called “Grand Challenges” where it believes the U.K. can “lead the world for years to come.” It has even specifically name-checked DeepMind as one of a handful of leading-edge homegrown AI businesses for the country to be proud of.

Still, questions over how to manage and regulate public sector data and AI deployments — especially in highly sensitive areas such as healthcare — remain to be clearly addressed by the government.

Meanwhile, the steady ingress of digital technologies into the healthcare space — even when the technologies involve no AI at all — is already presenting major challenges, putting pressure on existing information governance rules and structures and raising the specter of monopolistic risk.

Asked whether it offers any guidance to NHS Trusts around digital assistance for clinicians, including specifically whether it requires multiple options be offered by different providers, the NHS’ digital services provider, NHS Digital, referred our question on to the Department of Health (DoH), saying it’s a matter of health policy.

The DoH in turn referred the question to NHS England, the executive non-departmental body which commissions contracts and sets priorities and directions for the health service in England.

And at the time of writing, we’re still waiting for a response from the steering body.

Ultimately it looks like it will be up to the health service to put in place a clear and robust structure for AI and digital decision services that fosters competition by design by baking in a requirement for Trusts to support multiple independent options when procuring apps and services.

Without that important check and balance, the risk is that platform dynamics will quickly dominate and control the emergent digital health assistance space — just as big tech has dominated consumer tech.

But publicly funded healthcare decisions and data sets should not simply be handed to the single market-dominating entity that’s willing and able to burn the most resource to own the space.

Nor should government stand by and do nothing when there’s a clear risk that a vital area of digital innovation is at risk of being closed down by a tech giant muscling in and positioning itself as a gatekeeper before others have had a chance to show what their ideas are made of, and before even a market has had the chance to form. 

Microsoft acquires conversational AI startup Semantic Machines to help bots sound more lifelike

Microsoft announced today that it has acquired Semantic Machines, a Berkeley-based startup that wants to solve one of the biggest challenges in conversational AI: making chatbots sound more human and less like, well, bots.

In a blog post, Microsoft AI & Research chief technology officer David Ku wrote that “with the acquisition of Semantic Machines, we will establish a conversational AI center of excellence in Berkeley to push forward the boundaries of what is possible in language interfaces.”

According to Crunchbase, Semantic Machines was founded in 2014 and raised about $20.9 million in funding from investors including General Catalyst and Bain Capital Ventures.

In a 2016 profile, co-founder and chief scientist Dan Klein told TechCrunch that “today’s dialog technology is mostly orthogonal. You want a conversational system to be contextual so when you interpret a sentence things don’t stand in isolation.” By focusing on memory, Semantic Machines’ AI can produce conversations that not only answer or predict questions more accurately, but also flow naturally.

Instead of building its own consumer products, Semantic Machines focused on enterprise customers. This means it will fit in well with Microsoft’s conversational AI-based products, including Microsoft Cognitive Services and Azure Bot Service, which are used by one million and 300,000 developers, respectively, and virtual assistants Cortana and XiaoIce.

The new AI-powered Google News app is now available for iOS

Google teased a new version of its News app with AI smarts at its I/O event last week, and today that revamped app landed for iOS and Android devices in 127 countries. The redesigned app replaces the previous Google Play Newsstand app.

The idea is to make finding and consuming news easier than ever, whilst providing an experience that’s customized to each reader and supportive of media publications. The AI element is designed to learn from what you read to help serve you a better selection of content over time, while the app is presented with a clear and clean layout.
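What “learning from what you read” means, in the simplest possible terms, can be sketched as counting topic exposures and ranking candidate stories accordingly. This is purely an illustration of the idea, not Google’s actual (and far more sophisticated) system:

```python
from collections import Counter

def rank_stories(read_history, candidates):
    """Rank candidate stories by how often the reader has opened that topic.

    read_history: list of topic strings the user has previously read.
    candidates: list of (title, topic) pairs. Returns titles, best match first.
    """
    topic_counts = Counter(read_history)
    return [title for title, topic in
            sorted(candidates, key=lambda s: topic_counts[s[1]], reverse=True)]

# A reader who mostly opens AI stories sees AI content ranked first.
history = ["ai", "ai", "sports", "ai"]
stories = [("Stadium opens", "sports"),
           ("New GAN paper", "ai"),
           ("Election recap", "politics")]
ranked = rank_stories(history, stories)
```

A real system would of course weight recency, source quality and many other signals, but the core loop is the same: reading behavior feeds back into the next selection.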

Opening the app brings up the tailored ‘For You’ tab which acts as a quick briefing, serving up the top five stories “of the moment” and a tailored selection of opinion articles and longer reads below it.

The next section — ‘Headlines’ — dives more deeply into the latest news, covering global, U.S., business, technology, entertainment, sports, science and health segments. Clicking a story pulls up ‘Full Coverage’ mode, which surfaces a range of content around a topic including editorial and opinion pieces, tweets, videos and a timeline of events.


Favorites is a tab that allows customization set by the user — without AI. It works as you’d imagine, letting you mark out preferred topics, news sources and locations to filter your reads. There’s also an option for saved searches and stories which can be quickly summoned.

The final section is ‘Newsstand’ which, as the name suggests, aggregates media. Google said last week that it plans to offer over 1,000 magazine titles, which you can follow by tapping a star icon or subscribing to. It currently looks a little sparse without specific magazine titles, but we expect that’ll come soon.

As part of that, another feature coming soon is “Subscribe with Google,” which lets publications offer subscription-based content. The process of subscribing will use a user’s Google account and the payment information they already have on file. The paid content then becomes available across Google platforms, including Google News, Google Search and publishers’ own websites.

China’s SenseTime, the world’s highest valued AI startup, raises $600M

The future of artificial intelligence (AI), the technology that is seen as potentially impacting almost every industry on the planet, is widely acknowledged to be a war between tech firms in America and China.

In a notable side-note to that battle, China now has the world’s highest-valued AI startup after SenseTime, a company founded in 2014, announced a $600 million Series C investment round. A source with knowledge of discussions told TechCrunch that the round values the company at over $4.5 billion, while it is also raising an extension to this round. That marks a hefty increase on the company’s most recent $1.5 billion valuation when it raised a $410 million Series B last year.

SenseTime CEO Li Xu said the company plans to use the capital to expand its presence overseas and “widen the scope for more industrial application of AI.”

Beyond the high figures involved — the round is a record fundraising for an AI company worldwide — SenseTime’s investment efforts are notable because of the names that have backed it.

Principally that’s Alibaba, the $429 billion e-commerce giant, which led this Series C round and is reportedly now SenseTime’s largest single investor, according to Bloomberg.

Beyond that, U.S. chipmaking giant Qualcomm signed up last year — seemingly as an early participant in this round — while Singapore’s sovereign fund Temasek and China’s largest electronics retailer Suning, which has taken investment from Alibaba, entered the round as new backers. Indeed, Suning’s push for its store of the future, which was kickstarted by that Alibaba investment, uses SenseTime to power facial recognition payments at staff-less checkouts, as well as customer analysis using big data systems.

“SenseTime is doing pioneering work in artificial intelligence. We are especially impressed by their R&D capabilities in deep learning and visual computing. Our business at Alibaba is already seeing tangible benefits from our investments in AI and we are committed to further investment,” said Joe Tsai, Alibaba’s executive vice chairman.

SenseTime said it has more than 400 customers across a range of verticals including fintech, automotive, smartphones, smart city development and more, among them Honda, Nvidia, China’s UnionPay, Weibo, China Merchants Bank, Huawei, Oppo, Vivo and Xiaomi.

Perhaps its most visible partner is the Chinese government, which uses its systems for its national surveillance program. SenseTime processes data captured by China’s 170 million CCTV cameras, as well as by newer systems that include smart glasses worn by police officers on the street.

China has placed vast emphasis on tech development, with AI as one of its flagship priorities.

A government program aims to make the country the world leader in AI technology by 2030, the New York Times reported, by which time it is estimated that the industry could be worth some $150 billion per year. SenseTime’s continued development feeds directly into that ambition.

“AI is really changing every profession and every industry. There’s almost nothing that won’t be touched by AI,” investor Kai-Fu Lee, formerly the head of Google in China, said at a TechCrunch event back in 2016.

Even two years ago, the potential was evident, with Lee explaining that teaching, medicine and healthcare were obvious areas for disruption.

Perhaps the main difference between the state of AI development in the U.S. and China is that, in America, much of the technology is being developed in big tech firms like Amazon and Google. In China, however, companies like SenseTime and its rival Megvii (which develops the Face++ platform) are independent entities that operate with the financial backing of giants like Alibaba.

US-China biotech startup XtalPi lands $15M from Google, Tencent and Sequoia

Google continues to increase its presence in China after it joined Sequoia China and Tencent in a $15 million investment for XtalPi, a U.S.-China biotech firm that uses artificial intelligence and computing to accelerate the development of new drugs. The search giant remains blocked in China, but that hasn’t stopped it from making a series of moves in recent months. It is opening an…

Google declares war against Alexa and Siri at CES 2018


It’s an artificial intelligence showdown.

This year at CES, the world’s largest electronics trade show (running Jan. 9-12), thousands of companies will travel to Las Vegas to show off their newest products and build new partnerships. But this time around, one unusual exhibitor stands out from the rest: Google.

It’s the first time in many years that Google will have its own, large, standalone booth in the middle of the convention center. But the search giant has gone far beyond buying space on the showroom floor. It’s also commissioned several large advertisements around the city, including one you simply can’t miss.


VW taps Nvidia to build AI into its new electric microbus and beyond

 Nvidia will power artificial intelligence technology built into Volkswagen’s future vehicles, including the new I.D. Buzz, its all-electric retro-inspired camper van concept. The partnership between the two companies extends beyond that concept to future vehicles, and will initially focus on so-called “Intelligent Co-Pilot” features, including using sensor data to make driving easier, safer and… Read More

Google has planted its flag at CES

 Google’s here, and it’s planning something big. The company’s presence is impossible to miss as you drive down Paradise Road toward the Las Vegas Convention Center. Like much the rest of the show, the company’s parking lot booth is still under construction today, but the giant, black and white “Hey Google” sign is already hanging above it, visible from… Read More

Horizons Ventures backs AI startup Fano Labs in first Hong Kong investment

 Horizons Ventures, the VC firm founded by Hong Kong’s richest man Li Ka-Shing, has made a rare early-stage investment after it backed AI startup Fano Labs.
Horizons has invested in the likes of Facebook, Razer, Slack, Improbable, Spotify and more, and now it is putting undisclosed money into Fano Labs, which recently graduated AI accelerator program Zeroth. This deal also marks the… Read More

Hello Aibo, goodbye Alexa: Sony turns robot dog into AI assistant

That robotic dog you wanted as a kid is back. And sadly, it’s just as expensive.

Sony has announced that, more than a decade after retiring its robot dog, the Aibo is coming back for real.

The new Aibo has also learnt some new tricks. Its AI capability will allow it to learn and recognise people’s faces, and remember and avoid obstacles in a room.

It’ll also be voice-capable and cloud-connected, able to record photos and save them online. For example, saying “take a picture” will trigger the Aibo to take a shot and send it to the cloud, accessible later from a companion app.  Read more…

Powered by WPeMatico

Primer helps governments and corporations monitor and understand the world’s information

 When Google was founded in 1998, its goal was to organize the world’s information. And for the most part, mission accomplished — but in 19 years the goal posts have moved: indexing and usefully presenting information isn’t enough. As machine learning matures, it’s becoming feasible for the first time to actually summarize and contextualize the world’s… Read More

Let's all take a deep breath and stop freaking out about Facebook's bots 'inventing' a new language

Tesla CEO Elon Musk made headlines last week when he tweeted about his frustrations that Mark Zuckerberg, ever the optimist, doesn’t fully understand the potential danger posed by artificial intelligence. 

So when media outlets began breathlessly re-reporting a weeks-old story that Facebook’s AI-trained chatbots “invented” their own language, it’s not surprising the story caught more attention than it did the first time around.

Understandable, perhaps, but it’s exactly the wrong thing to be focusing on. The fact that Facebook’s bots “invented” a new way to communicate wasn’t even the most shocking part of the research to begin with. Read more…

iRobot to acquire its biggest European distributor for $141M

 Consumer robot maker iRobot is to acquire its largest European distributor, Robopolis, in a cash deal worth $141 million. The company said it’s signed a definitive agreement to acquire the privately-held, French company, with the acquisition expected to close in October 2017. Read More

Kakao is putting speech recognition tech into cars from Hyundai and Kia

 Less than a month after announcing plans to spin out its transportation and mobility business, Korean tech firm Kakao has inked deals to put hands-free systems inside cars from Korea’s second largest automotive firm Hyundai and its Kia affiliate.
Kakao is best known for operating Korea’s top messaging app, Kakao Talk, which is installed on over 95 percent of the country’s… Read More

VCs determined to replace your job keep AI’s funding surge rolling in Q2

 These are good times for AI entrepreneurs. Venture, corporate and seed investors have put an estimated $3.6 billion into AI and machine learning companies this year, according to Crunchbase data. That’s more than they invested in all of 2016, marking the largest recorded sum ever put into the space in a comparable period. Read More

TrueFace.AI busts facial recognition imposters

Facial recognition technology is more prevalent than ever before. It’s being used to identify people in airports, put a stop to child sex trafficking, and shame jaywalkers.

But the technology isn’t perfect. One major flaw: It sometimes can’t tell the difference between a living person’s face and a photo of that person held up in front of a scanner. 

TrueFace.AI facial recognition is trying to fix that flaw. Launched on Product Hunt in June, it’s meant to detect “picture attacks.”

The company originally created Chui in 2014 to work with customized smart homes. Then they realized clients were using it more for security purposes, and TrueFace.AI was born.  Read more…

After beating the world’s elite Go players, Google’s AlphaGo AI is retiring

 Google’s AlphaGo — the AI developed to tackle the world’s most demanding strategy game — is stepping down from competitive matches after defeating the world’s best talent. The latest to succumb is Go’s top-ranked player, Ke Jie, who lost 3-0 in a series hosted in China this week. The AI, developed by London-based DeepMind, which was acquired by Google… Read More

Chinese authorities banned the broadcast of a match between the top Go player and AlphaGo AI

A Go match between the world’s top player, Ke Jie, and Google’s AlphaGo that took place this week was censored by authorities, reports Quartz.

The AI beat Ke Jie yet again today, sealing its win in the three-game series.

Three journalists have reported receiving verbal directives barring their news organisations from broadcasting the match — as well as the Go and AI summit held in Wuzhen, east China. 

One journalist reported being barred from even mentioning Google’s name while reporting on the event, while another said that while they could mention Google, they were barred from writing about Google’s products. Read more…

Project recreates cities in rich 3D from images harvested online

 People are taking photos and videos all over major cities, all the time, from every angle. Theoretically, with enough of them, you could map every street and building — wait, did I say theoretically? I meant in practice, as the VarCity project has demonstrated with Zurich, Switzerland. Read More

DeepGraph feeds enterprise sales teams with hyper-targeted warm leads

 The best way to grow sales is to better understand sales, but unfortunately that’s often easier said than done. Kemvi, a seed-stage startup, is launching out of stealth today to announce DeepGraph, which helps sales teams reach the right potential customers at the right time. The company has closed north of $1 million in seed financing from Seabed VC, Neotribe Ventures, Kepha… Read More

Media Prima buys Rev Asia for $24M to create Malaysia’s largest digital media platform

 The U.S. isn’t the only market where media companies are consolidating to offer an advertising platform to rival Facebook and Google.
While AOL (which owns TechCrunch) is in the process of acquiring Yahoo, over in Malaysia a similar consolidation was announced this week — although not quite on the scale of AOL-Yahoo (Oath?!) and its $4.48 billion price tag. Media Prima, a… Read More

Google's new AutoDraw wants to make drawing easier for everyone

Your doodles are about to get a whole lot better.

Part of Google’s new A.I. Experiments collection, AutoDraw is like an AI-powered Microsoft Paint. The app combines conventional doodling with art from professionals to enhance your doodles and help create better art. 

The app works by trying to guess what you’re drawing and then offering alternatives for you to build on. 

Google calls AutoDraw “a drawing tool for the rest of us,” that is, people who aren’t professional designers. What it really does is recognize what you’re trying to draw and replace it with a version drawn by an artist. The tool took my terrible doodle of a cake, which really could have been anything, and offered me this great cake instead.  Read more…

Commission your own traffic and construction studies without ever leaving bed using SpaceKnow

 The number of things that can be done from the comfort of one’s own bed has increased in recent years — shopping, banking and now geospatial analytics. Ok, it doesn’t sound sexy but it might give you a leg up the next time your friend starts an arcane argument with you over whose neighborhood historically has more vehicles on the road. With SpaceKnow’s online… Read More

6 River Systems unveils warehouse robots that show workers the way

 When Amazon acquired Kiva Systems in 2012, other retailers and third-party fulfillment centers panicked. The e-commerce giants took Kiva’s robots off the market, leaving their competitors without an important productivity tool. Lots of newcomers have cropped up to help warehouses keep up with demand since then. But one of the most hotly anticipated robots in this space was under… Read More

Matroid can watch videos and detect anything within them

 If a picture is worth a thousand words, a video is worth that times the frame rate. Matroid, a computer vision startup launching out of stealth today, enables anyone to take advantage of the information inherently embedded in video. You can build your own detector within the company’s intuitive, non-technical, web platform to detect people and most other objects. Reza Zadeh, founder… Read More

Cognitiv+ is using AI for contract analysis and tracking

 Another legal tech startup coming out of the UK: Cognitiv+ is applying artificial intelligence to automate contract analysis and management, offering businesses a way to automate staying on top of legal risks, obligations and changing regulatory landscapes. Read More

Goodyear’s AI tire concept can read the road and adapt on the fly

 Goodyear is thinking ahead to how tires – yes, tires – might change as autonomous driving technology alters vehicle design, and as available technologies like in-vehicle and embedded machine learning and AI make it possible to do more with parts of the car that were previously pretty static, like its wheels. Its new Eagle 360 Urban tire concept design builds on the work it… Read More

IBM and Salesforce partner to sell Watson and Einstein

Two of the best-marketed names in artificial intelligence are coming together to pitch their wares to new customers with the announcement that IBM and Salesforce are going to partner. The new partnership amounts to a way for IBM to sell consulting services across both Salesforce’s Einstein and IBM’s Watson AI-branded businesses. Insights from Watson will now… Read More

Ozlo releases a suite of APIs to power your next conversational AI

Building on its promise to give the entrenched a run for their money, conversational AI startup Ozlo is making its meticulously crafted knowledge layer available for purchase today. Ozlo’s new suite of APIs, which includes tools for both expressing knowledge and understanding language, will help to democratize the creation of conversational AI assistants. In the spirit of the expert systems… Read More

Chat app Line is developing an AI assistant and Amazon Echo-style smart speaker

Messaging app Line is taking a leaf out of the books of Amazon, Google and others after it launched its own artificial intelligence platform. A voice-powered concierge service called Clova — short for “Cloud Virtual Assistant” — is the centerpiece of the service, much like Amazon’s Alexa, Microsoft’s Cortana and Google Assistant. Beyond the assistant… Read More

UK’s long-delayed digital strategy looks to AI but is locked to Brexit

The UK government is due to publish its long-awaited Digital Strategy later today, about a year later than originally slated, with existing delays compounded by the shock of Brexit. Drafts of the strategy framework seen by TechCrunch suggest its scope and ambition vis-à-vis digital technologies have been pared back and repositioned versus earlier formulations of the plan. Read More

Superintelligent AI explains Softbank’s push to raise a $100BN Vision Fund

Anyone who’s seen Softbank CEO Masayoshi Son give a keynote speech will know he rarely sticks to the standard industry conference playbook. And his turn on the stage at Mobile World Congress this morning was no different, with Son making like Eldon Tyrell and telling delegates about his personal belief in a looming computing Singularity… Read More

Not another AI post

This post is about a better world brought by human ingenuity. It’s about a human opportunity, an invitation to founders and investors in advanced economies to come and help us change the lives of billions of humans. Come join the movement to help mankind move forward for a better, fairer future. It’s time! Read More

Conversational AI and the road ahead

In recent years, we’ve seen an increasing number of so-called “intelligent” digital assistants being introduced on various devices. Although the technology behind these applications keeps getting better, there’s still a tendency for people to be disappointed by their capabilities — the expectation of “intelligence” is not being met. Read More

Super Smash Borg Melee: AI takes on top players of the classic Nintendo fighting game

You can add the cult classic Super Smash Bros Melee to the list of games soon to be dominated by AIs. Research at MIT’s Computer Science and Artificial Intelligence Laboratory has produced a computer player superior to the drones you can already fight in the game. It’s good enough that it held its own against globally-ranked players. Read More

Voysis raises $8 million to help it become the Twilio of voice AI

Voice-powered artificial intelligence is not something that’s easy to set up for just any business, even if it might have real benefits in terms of driving sales or improving customer experience. Voysis is a startup that wants to change that, with an AI platform that can parse natural language input, and that works effectively in specific domains including ecommerce, entertainment and more. Read More

Facebook on course to be the WeChat of the West, says Gartner

It’s the beginning of the end for smartphone apps as we have known and tapped on them, reckons Gartner. The analyst is calling the start of a “post-apps” era, based on changes in consumer interactions that appear driven, in large part, by the rise of dominant messaging platforms designed to consume more and more of mobile users’ time and attention. Read More
