machine learning

These stoner hackers want machine learning to save us from sick weed

Nothing harshes a good mellow like sick buds. Thankfully, there may one day be an app for that. 

Hidden from the hazy Friday afternoon of Las Vegas, tucked away in the basement of the Flamingo casino, a group of like-minded hackers and security researchers gathered to explore “DIY cannabis tech” at DEF CON’s Cannabis Village. One researcher in particular, Harry Moreno, told the rather laid-back crowd that he believed machine learning could one day solve a huge problem for home-grow enthusiasts: determining whether, and in what capacity, a marijuana plant is sick.
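
Moreno didn’t share an implementation, but the approach he described (point a camera at a plant, let a model diagnose it) is a standard image-classification problem. A minimal sketch of the idea, assuming a folder of labeled plant photos and a pretrained network to fine-tune, might look like the following; the class labels and paths are hypothetical, not from his talk:

```python
# Minimal sketch of a "sick plant" image classifier.
# Assumes labeled photos in data/healthy, data/mold, data/nutrient_deficiency,
# etc. -- hypothetical labels, not from Moreno's talk.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Start from an ImageNet-pretrained network and retrain only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch, for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```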

SessionM customer loyalty data aggregator snags $23.8M investment

SessionM announced a $23.8 million Series E investment led by Salesforce Ventures. A bushel of existing investors including Causeway Media Partners, CRV, General Atlantic, Highland Capital and Kleiner Perkins Caufield & Byers also contributed to the round. The company has now raised over $97 million.

At its core, SessionM aggregates loyalty data for brands to help them understand their customers better, says company co-founder and CEO Lars Albright. “We are a customer data and engagement platform that helps companies build more loyal and profitable relationships with their consumers,” he explained.

Essentially, that means they are pulling data from a variety of sources and helping brands offer customers more targeted incentives, offers and product recommendations. “We give [our users] a holistic view of that customer and what motivates them,” he said.

Screenshot: SessionM (cropped)

To achieve this, SessionM takes advantage of machine learning to analyze the data stream and integrates with partner platforms like Salesforce, Adobe and others. This certainly fits in with Adobe’s goal to build a customer service experience system of record and Salesforce’s acquisition of Mulesoft in March to integrate data from across an organization, all in the interest of better understanding the customer.

When it comes to using data like this, especially with the advent of GDPR in the EU in May, Albright recognizes that companies need to be more careful with data, and says the regulation has heightened the sensitivity around stewardship for all data-driven businesses like his.

“We’ve been at the forefront of adopting the right product requirements and features that allow our clients and businesses to give their consumers the necessary control to be sure we’re complying with all the GDPR regulations,” he explained.

The company would not discuss valuation or revenue. Its most recent round prior to today’s announcement was a Series D in 2016 for $35 million, also led by Salesforce Ventures.

SessionM, which was founded in 2011, has around 200 employees with headquarters in downtown Boston. Customers include Coca-Cola, L’Oreal and Barney’s.

Machine learning boosts Swiss startup’s shot at human-powered land speed record

The current world speed record for riding a bike down a straight, flat road was set in 2012 by a Dutch team, but the Swiss have a plan to topple their rivals — with a little help from machine learning. An algorithm trained on aerodynamics could streamline their bike, perhaps cutting air resistance by enough to set a new record.

The record is currently held by Sebastiaan Bowier, whose 2012 run clocked 133.78 km/h, or just over 83 mph. It’s hard to imagine how his bike, which looked more like a tiny landbound rocket than any kind of bicycle, could be significantly improved on.

But every little bit counts when records are measured down to the hundredth of a unit, and besides, who knows whether some strange new shape might totally change the game?

To pursue this, researchers at the École Polytechnique Fédérale de Lausanne’s Computer Vision Laboratory developed a machine learning algorithm that, trained on 3D shapes and their aerodynamic qualities, “learns to develop an intuition about the laws of physics,” as the university’s Pierre Baqué said.

“The standard machine learning algorithms we work with in our lab take images as input,” he explained in an EPFL video. “An image is a very well-structured signal that is very easy to handle by a machine learning algorithm. However, engineers working in this domain use what we call a mesh. A mesh is a very large graph with a lot of nodes that is not very convenient to handle.”

Nevertheless, the team managed to design a convolutional neural network that can sort through countless shapes and automatically determine which should (in theory) provide the very best aerodynamic profile.

“Our program results in designs that are sometimes 5-20 percent more aerodynamic than conventional methods,” Baqué said. “But even more importantly, it can be used in certain situations that conventional methods can’t. The shapes used in training the program can be very different from the standard shapes for a given object. That gives it a great deal of flexibility.”

That means the algorithm isn’t limited to slight variations on established designs; it’s also flexible enough to take on other fluid dynamics problems, like wing shapes, windmill blades or cars.
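
Neural Concept hasn’t published its code, but the core pattern is easy to sketch: train a network as a fast surrogate for an expensive simulator, then search candidate shapes against the surrogate, which is cheap enough to score far more designs than CFD ever could. Everything below is illustrative; a toy parameter vector stands in for a real mesh, and a synthetic function stands in for the fluid simulation:

```python
# Toy surrogate-model sketch: learn "drag" from shape parameters, then
# search for low-drag shapes. A real pipeline would use meshes and CFD;
# here a synthetic function stands in for the simulator.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulated_drag(shapes):
    # Stand-in for an expensive CFD run (invented formula).
    return (shapes ** 2).sum(axis=1) + 0.1 * np.sin(5 * shapes).sum(axis=1)

# "Run the simulator" on a few hundred random shapes to build training data.
train_shapes = rng.uniform(-1, 1, size=(500, 8))  # 8 shape parameters
train_drag = simulated_drag(train_shapes)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(train_shapes, train_drag)

# The surrogate is cheap, so we can score 100,000 candidates in one call.
candidates = rng.uniform(-1, 1, size=(100_000, 8))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("best shape:", best, "true drag:", simulated_drag(best[None])[0])
```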

The tech has been spun out into a separate company, Neural Concept, of which Baqué is the CEO. It was presented today at the International Conference on Machine Learning in Stockholm.

A team from the Annecy University Institute of Technology will attempt to apply the computer-honed model in person at the World Human Powered Speed Challenge in Nevada this September — after all, no matter how much computer assistance there is, as the name says, it’s still powered by a human.

Facebook’s new AI research is a real eye-opener

There are plenty of ways to manipulate photos to make you look better, remove red eye or lens flare, and so on. But so far the blink has proven a tenacious opponent of good snapshots. That may change with research from Facebook that replaces closed eyes with open ones in a remarkably convincing manner.

It’s far from the only example of intelligent “in-painting,” as the technique is called when a program fills in a space with what it thinks belongs there. Adobe in particular has made good use of it with its “Content-Aware Fill,” allowing users to seamlessly replace undesired features, for example a protruding branch or a cloud, with a pretty good guess at what would be there if it weren’t.

But some features are beyond these tools’ capacity to replace, one of which is eyes. Their detailed and highly variable nature makes it particularly difficult for a system to change or create them realistically.

Facebook, which probably has more pictures of people blinking than any other entity in history, decided to take a crack at this problem.

It does so with a Generative Adversarial Network, essentially a machine learning system that tries to fool itself into thinking its creations are real. In a GAN, one part of the system learns to recognize, say, faces, and another part of the system repeatedly creates images that, based on feedback from the recognition part, gradually grow in realism.
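
Facebook’s exemplar-conditioned model is considerably more elaborate, but the adversarial loop itself is simple enough to sketch. In the toy example below (not Facebook’s architecture), a generator learns to produce 2-D points that a discriminator can’t tell apart from samples drawn around a target point:

```python
# Minimal GAN training loop: D learns to tell real from fake, G learns
# to fool D. Toy 2-D data, nothing like Facebook's eye model.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + 2.0  # "real" samples: a blob at (2, 2)
    fake = G(torch.randn(64, 8))

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The generator’s only feedback comes from the discriminator, which is what lets the realism of its output ratchet upward over training.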

From left to right: “Exemplar” images, source images, Photoshop’s eye-opening algorithm, and Facebook’s method.

In this case the network is trained to both recognize and replicate convincing open eyes. This could be done already, but as you can see in the examples at right, existing methods left something to be desired. They seem to paste in the eyes of the people without much consideration for consistency with the rest of the image.

Machines are naive that way: they have no intuitive understanding that opening one’s eyes does not also change the color of the skin around them. (For that matter, they have no intuitive understanding of eyes, color, or anything at all.)

What Facebook’s researchers did was to include “exemplar” data showing the target person with their eyes open, from which the GAN learns not just what eyes should go on the person, but how the eyes of this particular person are shaped, colored, and so on.

The results are quite realistic: there’s no color mismatch or obvious stitching because the recognition part of the network knows that that’s not how the person looks.

In testing, people mistook the fake eyes-opened photos for real ones, or said they couldn’t be sure which was which, more than half the time. And unless I knew a photo was definitely tampered with, I probably wouldn’t notice if I was scrolling past it in my newsfeed. Gandhi looks a little weird, though.

It still fails in some situations, creating weird artifacts if a person’s eye is partially covered by a lock of hair, or sometimes failing to recreate the color correctly. But those are fixable problems.

You can imagine the usefulness of an automatic eye-opening utility on Facebook that checks a person’s other photos and uses them as reference to replace a blink in the latest one. It would be a little creepy, but that’s pretty standard for Facebook, and at least it might save a group photo or two.

Amazon starts shipping its $249 DeepLens AI camera for developers

Back at its re:Invent conference in November, AWS announced its $249 DeepLens, a camera that’s specifically geared toward developers who want to build and prototype vision-centric machine learning models. The company started taking pre-orders for DeepLens a few months ago, but now the camera is actually shipping to developers.

Ahead of today’s launch, I had a chance to attend a workshop in Seattle with DeepLens senior product manager Jyothi Nookula and Amazon’s VP for AI Swami Sivasubramanian to get some hands-on time with the hardware and the software services that make it tick.

DeepLens is essentially a small Ubuntu- and Intel Atom-based computer with a built-in camera that’s powerful enough to easily run and evaluate visual machine learning models. In total, DeepLens offers about 106 GFLOPS of performance.

The hardware has all of the usual I/O ports (think Micro HDMI, USB 2.0, Audio out, etc.) to let you create prototype applications, no matter whether those are simple toy apps that send you an alert when the camera detects a bear in your backyard or an industrial application that keeps an eye on a conveyor belt in your factory. The 4 megapixel camera isn’t going to win any prizes, but it’s perfectly adequate for most use cases. Unsurprisingly, DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models.

These integrations are also what makes getting started with the camera pretty easy. Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up your DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs, as you can see in the image above), a style transfer example to render the camera image in the style of van Gogh, a face detection model, a model that can distinguish between cats and dogs, and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.

But that’s obviously just the beginning. As the DeepLens team stressed during our workshop, even developers who have never worked with machine learning can take the existing templates and easily extend them. In part, that’s due to the fact that a DeepLens project consists of two parts: the model and a Lambda function that runs instances of the model and lets you perform actions based on the model’s output. And with SageMaker, AWS now offers a tool that also makes it easy to build models without having to manage the underlying infrastructure.
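
AWS’s published DeepLens samples show what the Lambda half of a project looks like in practice: grab frames from the camera, run them through the deployed model, and act on the parsed output. Below is a stripped-down version of that pattern; treat the model path and the exact awscam calls as illustrative rather than definitive:

```python
# Skeleton of a DeepLens inference loop, following the pattern in AWS's
# published samples. Model path and method details are illustrative.
import awscam
import cv2

# Load the model artifact that Greengrass deployed to the device
# (hypothetical path).
model = awscam.Model("/opt/awscam/artifacts/my-model.xml", {"GPU": 1})

while True:
    ret, frame = awscam.getLastFrame()  # latest frame from the camera
    if not ret:
        continue
    resized = cv2.resize(frame, (300, 300))
    raw = model.doInference(resized)
    detections = model.parseResult("ssd", raw)["ssd"]
    for obj in detections:
        if obj["prob"] > 0.5:
            # Act on the model's output: publish to an IoT topic,
            # raise an alert, annotate the frame, and so on.
            print("saw class", obj["label"], "confidence", obj["prob"])
```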

You could do a lot of the development on the DeepLens hardware itself, given that it is essentially a small computer, though you’re probably better off using a more powerful machine and then deploying to DeepLens using the AWS Console. If you really wanted to, you could use DeepLens as a low-powered desktop machine as it comes with Ubuntu 16.04 pre-installed.

For developers who know their way around machine learning frameworks, DeepLens makes it easy to import models from virtually all the popular tools, including Caffe, TensorFlow, MXNet and others. It’s worth noting that the AWS team also built a model optimizer for MXNet models that allows them to run more efficiently on the DeepLens device.

So why did AWS build DeepLens? “The whole rationale behind DeepLens came from a simple question that we asked ourselves: How do we put machine learning in the hands of every developer,” Sivasubramanian said. “To that end, we brainstormed a number of ideas and the most promising idea was actually that developers love to build solutions as hands-on fashion on devices.” And why did AWS decide to build its own hardware instead of simply working with a partner? “We had a specific customer experience in mind and wanted to make sure that the end-to-end experience is really easy,” he said. “So instead of telling somebody to go download this toolkit and then go buy this toolkit from Amazon and then wire all of these together. […] So you have to do like 20 different things, which typically takes two or three days and then you have to put the entire infrastructure together. It takes too long for somebody who’s excited about learning deep learning and building something fun.”

So if you want to get started with deep learning and build some hands-on projects, DeepLens is now available on Amazon. At $249, it’s not cheap, but if you are already using AWS — and maybe even use Lambda already — it’s probably the easiest way to get started with building this kind of machine learning-powered application.

IBM launches deep learning as a service inside its Watson Studio

IBM’s Watson Studio, the company’s service for building machine learning workflows and training models, is getting a new addition today with the launch of Deep Learning as a Service (DLaaS). The general idea here, which is similar to that of competing services, is to enable a wider range of businesses to make use of recent advances in machine learning by lowering the barrier to entry.

With these new tools, developers can build their models with the same open source frameworks they are likely already using (think TensorFlow, Caffe, PyTorch, Keras, etc.). Indeed, IBM’s new service essentially offers these tools as cloud-native services, and developers can use a standard REST API to train their models with the resources they want, or within the budget they have. The service, which offers a command-line interface, a Python library and an interactive user interface, also gives developers the option to choose between different Nvidia GPUs, for example.

The idea of a managed environment for deep learning isn’t necessarily new. With Azure ML Studio, Microsoft offers a highly graphical experience for building ML models, after all. IBM argues that its service offers a number of distinct advantages, though. Among other things, the service offers a drag-and-drop neural network builder that allows even non-programmers to configure and design their neural networks.

In addition, IBM’s tools will also automatically tune hyperparameters for its users. That’s traditionally a rather time-consuming process when done by hand, and something that sits somewhere between art and science.
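
To see why automating that is welcome, consider the sort of loop it replaces: a hand-rolled random search over learning rate and layer size, with a stand-in where a real training run would go. This is a generic baseline, not IBM’s tuner:

```python
# Generic random hyperparameter search, the tedious manual loop that
# services like IBM's aim to automate. Not IBM's actual tuner.
import random

def train_and_score(learning_rate, hidden_units):
    # Stand-in for a full training run returning validation accuracy.
    return random.random()

best_score, best_config = -1.0, None
for trial in range(20):
    config = {
        "learning_rate": 10 ** random.uniform(-5, -1),  # log-uniform
        "hidden_units": random.choice([64, 128, 256, 512]),
    }
    score = train_and_score(**config)
    if score > best_score:
        best_score, best_config = score, config
print("best:", best_config, "score:", best_score)
```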

Primer helps governments and corporations monitor and understand the world’s information

When Google was founded in 1998, its goal was to organize the world’s information. And for the most part, mission accomplished — but in 19 years the goalposts have moved, and indexing and usefully presenting information isn’t enough. As machine learning matures, it’s becoming feasible for the first time to actually summarize and contextualize the world’s… Read More

Crunch Report | David Letterman Is Coming to Netflix

David Letterman is coming to Netflix, Didi Chuxing backs Careem in the Middle East, Cruise is running an autonomous ride-hailing service and Andrew Ng launches Deeplearning.ai. All this on Crunch Report. Read More

After beating the world’s elite Go players, Google’s AlphaGo AI is retiring

 Google’s AlphaGo — the AI developed to tackle the world’s most demanding strategy game — is stepping down from competitive matches after defeating the world’s best talent. The latest to succumb is Go’s top-ranked player, Ke Jie, who lost 3-0 in a series hosted in China this week. The AI, developed by London-based DeepMind, which was acquired by Google… Read More

Cognitiv+ is using AI for contract analysis and tracking

Another legal tech startup coming out of the UK: Cognitiv+ is applying artificial intelligence to automate contract analysis and management, offering businesses a way to stay on top of legal risks, obligations and changing regulatory landscapes. Read More

Goodyear’s AI tire concept can read the road and adapt on the fly

 Goodyear is thinking ahead to how tires – yes, tires – might change as autonomous driving technology alters vehicle design, and as available technologies like in-vehicle and embedded machine learning and AI make it possible to do more with parts of the car that were previously pretty static, like its wheels. Its new Eagle 360 Urban tire concept design builds on the work it… Read More

Not another AI post

This post is about a better world brought by human ingenuity. It’s about a human opportunity, an invitation to founders and investors in advanced economies to come and help us change the lives of billions of humans. Come join the movement to help mankind move forward for a better, fairer future. It’s time! Read More

Super Smash Borg Melee: AI takes on top players of the classic Nintendo fighting game

You can add the cult classic Super Smash Bros Melee to the list of games soon to be dominated by AIs. Research at MIT’s Computer Science and Artificial Intelligence Laboratory has produced a computer player superior to the drones you can already fight in the game. It’s good enough that it held its own against globally ranked players. Read More

Gamalon leverages the work of an 18th century reverend to organize unstructured enterprise data

It’s hard to fathom that the work of Reverend Thomas Bayes is still coming back to drive cutting-edge advancements in AI, but that’s exactly what’s happening. DARPA-backed Gamalon is the latest carrier of the Bayesian baton, launching today with a solution to help enterprises better manage their gnarly unstructured data.
The world of enterprise is full of unstructured data. Read More
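
The reverend’s contribution, the theorem that bears his name, is compact enough to compute directly. A toy version of the kind of update a Bayesian system makes (here, the probability that a record is an invoice given that it contains the word “amount”; all numbers invented) looks like this:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
# Toy update: is a record an invoice, given it contains "amount"?
# All numbers are invented for illustration.
p_invoice = 0.3                # prior: 30% of records are invoices
p_amount_if_invoice = 0.8      # likelihood of "amount" in invoices
p_amount_if_other = 0.1        # likelihood of "amount" elsewhere

p_amount = (p_amount_if_invoice * p_invoice
            + p_amount_if_other * (1 - p_invoice))
p_invoice_if_amount = p_amount_if_invoice * p_invoice / p_amount
print(f"P(invoice | 'amount') = {p_invoice_if_amount:.2f}")  # ~0.77
```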

How Facebook plans to evaluate its quest for generalized artificial intelligence

One of the biggest misconceptions about artificial intelligence is the belief that today’s AIs possess generalized intelligence. We are really good at leveraging large datasets to accomplish specific tasks, but fall flat at replicating the breadth of human intelligence. If we’re going to move towards generalized intelligence, Facebook wants to make sure we know how to… Read More

The sound of impending failure

If we can find a way to automate listening itself, we would be able to more intelligently monitor our world and its machines day and night. We could predict the failure of engines, rail infrastructure, oil drills and power plants in real time, notifying humans the moment an acoustic anomaly occurs. This has the potential to save lives, but despite advances in machine learning, we… Read More
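
The piece doesn’t spell out a method, but a common baseline for acoustic anomaly detection is straightforward: learn the average frequency spectrum of a healthy machine, then flag audio frames whose spectrum drifts too far from it. The sketch below uses synthetic audio and an arbitrary threshold; it is not any particular product’s algorithm:

```python
# Baseline acoustic anomaly detector: learn the average spectrum of a
# healthy machine, flag frames that deviate too much. Synthetic audio;
# the frame size and threshold are arbitrary choices.
import numpy as np

FRAME = 1024  # samples per analysis frame

def spectra(signal):
    frames = signal[: len(signal) // FRAME * FRAME].reshape(-1, FRAME)
    return np.abs(np.fft.rfft(frames, axis=1))

rng = np.random.default_rng(1)
healthy = rng.normal(size=48000)  # stand-in for recorded healthy audio
baseline = spectra(healthy).mean(axis=0)

def is_anomalous(frame_spectrum, threshold=3.0):
    # Deviation from the healthy baseline, relative to its overall scale.
    deviation = np.linalg.norm(frame_spectrum - baseline) / np.linalg.norm(baseline)
    return deviation > threshold

# Inject a loud tone (a new resonance, say) and watch the detector fire.
tone = 5 * np.sin(2 * np.pi * 100 * np.arange(FRAME) / FRAME)
live = rng.normal(size=FRAME) + tone
print("anomaly:", is_anomalous(spectra(live)[0]))  # True
```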

Putting the “intelligent” machine in its place

Photo: South Korean Go fans watch live footage of the Google DeepMind Challenge Match between grandmaster Lee Se-Dol and the Google-developed AlphaGo, at the Korea Baduk Association in Seoul on March 9, 2016. (JUNG YEON-JE/AFP/Getty Images)

Sometimes even just defining the problem you’re trying to solve is the hardest part. We need human intelligence to decide how and when to use machine intelligence, and the more sophisticated the uses we make of machine intelligence, the more critically we need human intelligence to ensure it’s deployed sensibly and safely. Read More

Using data science to beat cancer

Image: A dividing colorectal cancer cell undergoing mitosis, coloured scanning electron micrograph. (STEVE GSCHMEISSNER/Science Photo Library/Getty Images)

The complexity of seeking a cure for cancer has vexed researchers for decades. While they’ve made remarkable progress, they are still waging an uphill battle, as cancer remains one of the leading causes of death worldwide. Yet scientists may soon have a critical new ally at their sides — intelligent machines — that can attack that complexity in a different way. Read More