Cameras

Leica’s Q2 is a beautiful camera that I want and will never have

Leica is a brand I respect and appreciate but don’t support. Or rather, can’t, because I’m not fabulously rich. But if I did have $5,000 to spend on a fixed-lens camera, I’d probably get the new Q2, a significant improvement over 2015’s Q — which tempted me back then.

The Q2 keeps much of what made the Q great: a full-frame sensor, a fabulous 28mm f/1.7 Summilux lens, and straightforward operation focused on getting the shot. But it also makes some major changes that render the Q2 a far more competitive camera.

The sensor has jumped from 24 to 47 megapixels, and while we’re well out of the megapixel race, that resolution creates the opportunity for a very useful cropped shooting mode that lets you shoot at 35, 50, and 75mm equivalents while still capturing huge pixel counts. It keeps the full-frame exposure as well, so you can tweak the crop later. The new sensor also has a super low native ISO of 50, which should help with dynamic range and in certain exposure conditions.
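How huge are those pixel counts? Some quick back-of-the-envelope math (mine, not Leica’s spec sheet): cropping a 28mm full-frame capture to an f-mm equivalent keeps roughly (28/f)² of the pixels.

```python
# Rough estimate of the resolution remaining in each Q2 crop mode.
# Cropping a 28mm field of view to an f-mm equivalent keeps about
# (28/f)^2 of the frame's 47 megapixels.
FULL_SENSOR_MP = 47

for focal_length_mm in (35, 50, 75):
    cropped_mp = FULL_SENSOR_MP * (28 / focal_length_mm) ** 2
    print(f"{focal_length_mm}mm equivalent: ~{cropped_mp:.1f} MP")

# Output:
# 35mm equivalent: ~30.1 MP
# 50mm equivalent: ~14.7 MP
# 75mm equivalent: ~6.6 MP
```

Even the tightest 75mm crop leaves roughly 6.6 megapixels, which is still plenty for web use and modest prints.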

Autofocus has been redone as well (as you might expect with a new sensor) and should be quicker and more accurate now. There’s also an optical stabilization mode that kicks in when you’re shooting at under 1/60s. Both are features that need a little testing to verify they’re as good as they sound, but I don’t expect they’re fraudulent or anything.

The body, already a handsome minimal design in keeping with Leica’s impeccable (if expensive) taste, is now weather sealed, making this a viable walk-around camera in all conditions. Imagine paying five grand for a camera and being afraid to take it out in the rain! Well, many people did that and perhaps will feel foolish now that the Q2 has arrived.

Inside is an electronic viewfinder. The 2015 Q used a sequential-field display — meaning it flashed rapidly through the red, green, and blue components of the image — which made it prone to color artifacts in high-motion scenes or when panning. The Q2, however, has a shiny new OLED display with the same resolution but better performance. OLEDs are great for EVFs for a lot of reasons, but I especially like that you get really nice blacks, like in an optical viewfinder.

The button layout has been simplified as well (or rather synchronized with the CL, another Leica model), with a new customizable button on the top plate, reflecting the trend toward personalization we’ve seen in high-end cameras. A considerably larger battery and a redesigned battery and card door round out the new features.

As DPReview points out in its hands-on preview of the camera, the Q2 is significantly heavier than the high-end fixed-lens competition (namely the Sony RX1R II and Fuji X100F, both excellent cameras), and also significantly more expensive. But unlike many Leica offerings, it actually outperforms them in important ways: the lens, the weather sealing, the burst speed — it may be expensive, but you actually get something for your money. That can’t always be said of this brand.

The Leica Q2 typifies the camera I’d like to own: no real accessories, nothing to swap in or out, great image quality, and straightforward operation. I’m far more likely to get an X100F (and even then it’d be a huge splurge), but all the while I’ll be looking at the Q2 with envious eyes. Maybe I’ll get to touch one some day.

Amazon starts shipping its $249 DeepLens AI camera for developers

Back at its re:Invent conference in November, AWS announced its $249 DeepLens, a camera that’s specifically geared toward developers who want to build and prototype vision-centric machine learning models. The company started taking pre-orders for DeepLens a few months ago, but now the camera is actually shipping to developers.

Ahead of today’s launch, I had a chance to attend a workshop in Seattle with DeepLens senior product manager Jyothi Nookula and Amazon’s VP for AI Swami Sivasubramanian to get some hands-on time with the hardware and the software services that make it tick.

DeepLens is essentially a small Ubuntu- and Intel Atom-based computer with a built-in camera that’s powerful enough to easily run and evaluate visual machine learning models. In total, DeepLens offers about 106 GFLOPS of performance.

The hardware has all of the usual I/O ports (think Micro HDMI, USB 2.0, audio out, etc.) to let you create prototype applications, no matter whether those are simple toy apps that send you an alert when the camera detects a bear in your backyard or an industrial application that keeps an eye on a conveyor belt in your factory. The 4-megapixel camera isn’t going to win any prizes, but it’s perfectly adequate for most use cases. Unsurprisingly, DeepLens is deeply integrated with the rest of AWS’s services. Those include the AWS IoT service Greengrass, which you use to deploy models to DeepLens, for example, but also SageMaker, Amazon’s newest tool for building machine learning models.

These integrations are also what makes getting started with the camera pretty easy. Indeed, if all you want to do is run one of the pre-built samples that AWS provides, it shouldn’t take you more than 10 minutes to set up your DeepLens and deploy one of these models to the camera. Those project templates include an object detection model that can distinguish between 20 objects (though it had some issues with toy dogs), a style transfer example that renders the camera image in the style of van Gogh, a face detection model, a model that can distinguish between cats and dogs, and one that can recognize about 30 different actions (like playing guitar, for example). The DeepLens team is also adding a model for tracking head poses. Oh, and there’s also a hot dog detection model.

But that’s obviously just the beginning. As the DeepLens team stressed during our workshop, even developers who have never worked with machine learning can take the existing templates and easily extend them. That’s partly because a DeepLens project consists of two parts: the model and a Lambda function that runs instances of the model and lets you perform actions based on the model’s output. And with SageMaker, AWS now offers a tool that makes it easy to build models without having to manage the underlying infrastructure.
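To make that two-part structure concrete, here’s a minimal sketch of what the Lambda half of a project can look like, modeled on the patterns in AWS’s sample projects. The module names, call signatures, and model path are my assumptions based on those samples, not verified API documentation.

```python
# Minimal sketch of a DeepLens inference Lambda, modeled on AWS's sample
# projects. Module names, signatures, and the artifact path are
# assumptions, not verified against the current SDK.
import awscam            # on-device DeepLens SDK (present on the camera)
import greengrasssdk     # publishes results back through AWS IoT Greengrass

client = greengrasssdk.client('iot-data')

# Load a model artifact that Greengrass deployed to the device
# (hypothetical path; real projects get the path from their deployment).
model = awscam.Model('/opt/awscam/artifacts/my_model.xml', {'GPU': 1})

while True:
    ret, frame = awscam.getLastFrame()    # grab the latest camera frame
    if not ret:
        continue
    inference = model.doInference(frame)  # evaluate the model on the frame
    # This is where "perform actions based on the model's output" happens;
    # here we simply publish the raw result to an MQTT topic.
    client.publish(topic='deeplens/inference', payload=str(inference))
```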

You could do a lot of the development on the DeepLens hardware itself, given that it is essentially a small computer, though you’re probably better off using a more powerful machine and then deploying to DeepLens using the AWS Console. If you really wanted to, you could use DeepLens as a low-powered desktop machine as it comes with Ubuntu 16.04 pre-installed.

For developers who know their way around machine learning frameworks, DeepLens makes it easy to import models from virtually all the popular tools, including Caffe, TensorFlow, MXNet and others. It’s worth noting that the AWS team also built a model optimizer for MXNet models that allows them to run more efficiently on the DeepLens device.
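As a rough sketch of how that optimizer fits in on the device, AWS documents a small mo module whose optimize call converts a deployed MXNet artifact into a form the Intel hardware can run efficiently; the model name and input dimensions below are placeholders.

```python
# Sketch of optimizing a deployed MXNet model with the DeepLens 'mo'
# module, following the documented pattern; the model name and input
# size are placeholders, not a real project's values.
import awscam
import mo

# Convert the MXNet symbol/params artifact; returns an error code and
# the path of the optimized model.
error, model_path = mo.optimize('my_mxnet_model', 512, 512)
if error == 0:
    model = awscam.Model(model_path, {'GPU': 1})
```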

So why did AWS build DeepLens? “The whole rationale behind DeepLens came from a simple question that we asked ourselves: How do we put machine learning in the hands of every developer?” Sivasubramanian said. “To that end, we brainstormed a number of ideas and the most promising idea was actually that developers love to build solutions in a hands-on fashion on devices.” And why did AWS decide to build its own hardware instead of simply working with a partner? “We had a specific customer experience in mind and wanted to make sure that the end-to-end experience is really easy,” he said. “So instead of telling somebody to go download this toolkit and then go buy this toolkit from Amazon and then wire all of these together. […] So you have to do like 20 different things, which typically takes two or three days and then you have to put the entire infrastructure together. It takes too long for somebody who’s excited about learning deep learning and building something fun.”

So if you want to get started with deep learning and build some hands-on projects, DeepLens is now available on Amazon. At $249, it’s not cheap, but if you are already using AWS — and maybe even use Lambda already — it’s probably the easiest way to get started with building these kinds of machine learning-powered applications.

Camera lenses literally melted during the solar eclipse

Will people ever learn?

A camera rental company found its cameras and lenses severely damaged after people took them to shoot the solar eclipse last month.

This, despite warning users not to point their cameras directly at the sun.

Online rental shop LensRentals told renters that solar filters had to be attached to lenses to protect them and camera sensors during the eclipse.

Naturally, some people didn’t listen.

Here are the results, from burnt shutter systems:

Image: LensRentals

To damaged sensors:

Image: LensRentals

This Nikon D500 saw its mirror melt.

Snap is developing a drone for users to share overhead videos and photos: NYT report

One of the products that Snapchat owner Snap Inc. is developing as “a modern-day camera company” is a drone, reports the New York Times today.

Sources for this bold claim are “three people briefed on the project who asked to remain anonymous because the details are confidential.”

The drone would help users take videos and photographs from overhead, then share that visual data with Snap and, presumably, other users of the service.

Snap is scheduled to go public later this week in a long-anticipated IPO.

Sony stays in the picture with premium Xperia camera features

For all of its consumer electronics prowess, mobile has always been a tough proposition for Sony. The Android market is overcrowded, for one, and the company hasn’t really done all that much to set itself apart from the pack — save for one key strength: really great cameras. Imaging is once again the standout feature on the trio of Xperia handsets announced this morning at…

Canon's new cameras are perfect for Casey Neistat wannabes

Thanks to popular YouTubers like Casey Neistat, vlogging is more popular than ever. 

With its three new cameras — the EOS M6, EOS T7i and EOS 77D — Canon’s fully embracing the vlogging movement with open arms.

What makes a great vlogging camera? There aren’t any official requirements, but unofficially you want a camera with a touchscreen that flips out to face you so you can see what’s in your shot, an autofocus system that’s quick to lock onto faces and subjects, plenty of dials and buttons for adjusting essential camera settings (i.e. exposure, color temperature and aperture), and a lens system with a library of nice glass.
