GCAM, the secret Mantra of Google Pixel Phones

Last year, when Google launched its flagship phone, the Google Pixel, everyone was amazed by its camera quality. Photos were noticeably better than those from competitors like the iPhone 7 and Sony's high-end phones; the photo quality was comparable to that of SLR cameras.

So, what is the secret mantra of Google Pixel phones? What gives Pixel photos this high-end quality?

Google recently revealed that it is all about Gcam, the camera application used in Google Pixel phones.

Watch here: Google Pixel Photos – GCAM & Camera sample Images


The beginning of the Gcam project

Projects regularly graduate from Google's X lab; some fail and some go beyond all expectations. The latter was certainly the case for Gcam, the computational photography project that now powers the camera of the acclaimed Pixel phone, made by Google, as well as a range of other image-processing products across Alphabet.

Gcam began in 2011 with the idea of capturing photos hands-free, in real time. The head of X at the time, Sebastian Thrun, was searching for a camera that could live within Google Glass. Anyone from parents with small kids to doctors performing surgery could benefit from such a feature. However, for people to want to use it, Glass's picture-taking capabilities needed to be on par with cellphone cameras, at the very least.

Problems with Google Glass

Glass presented a number of camera design challenges: its tiny camera and lens starved the image of light; its image sensor was small relative to those in cell phones, which reduced low-light and dynamic-range performance; and it had very limited compute and battery power.


The reinvention

The team started to ask: what if we looked at this problem in an entirely new way? What if, instead of trying to solve it with better hardware, we could solve it with smart software choices? Marc Levoy, then a faculty member in the Stanford Computer Science department and an expert in computational photography, came up with software-powered image capture and processing techniques.

In 2011, Marc joined X, and the team became known as Gcam. Their mission was to improve photography on mobile devices by applying computational photography techniques. On their hunt for a solution to the challenges presented by Glass, the Gcam team explored a method called image fusion, which takes a rapid sequence of shots and then fuses them into a single, higher-quality image. The technique allowed them to render dimly lit scenes in greater detail and mixed-lighting scenes with greater clarity. This meant brighter, sharper pictures overall.
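The core idea behind image fusion can be sketched in a few lines: averaging an aligned burst of noisy frames reduces random sensor noise roughly by the square root of the number of frames. The sketch below uses synthetic data and skips the alignment step entirely; it is only an illustration of the principle, not Gcam's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dim, flat "true" scene and a burst of noisy low-light captures.
scene = np.full((4, 4), 0.2)
burst = [scene + rng.normal(0, 0.05, scene.shape)  # each frame is noisy
         for _ in range(8)]

# Fuse: average the (already aligned) frames into one image.
fused = np.mean(burst, axis=0)

# Noise in the fused frame drops roughly by sqrt(N) versus one frame.
single_noise = np.std(burst[0] - scene)
fused_noise = np.std(fused - scene)
print(fused_noise < single_noise)  # True: fusion reduces noise
```

In a real pipeline the frames must first be aligned to compensate for hand shake and motion, and the merge is smarter than a plain average, but the noise-averaging intuition is the same.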

Image fusion debuted in Glass in 2013, and it quickly became clear that this technology could be applied to products beyond Glass. As people increasingly turned to their phones to capture and share important moments in their lives, the software powering these cameras needed to produce beautiful images regardless of the lighting. Gcam's next iteration of image fusion, called HDR+, moved beyond Glass and launched within the Android camera app for the Nexus 5, and then the Nexus 6 the following year. HDR+ renders scenes with mixed light by combining short exposures with software that boosts the brightness of shadows, so that both the subject and the sky can be preserved. Some of the software smarts from the Gcam team are also included in Lens Blur, a feature in the Google camera app, and in the software that stitches together the panoramas for Jump's 360˚ Virtual Reality videos.
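The HDR+ trick described above (expose for the highlights, then brighten the shadows in software) can be illustrated with a toy example. The numbers and the gamma curve here are arbitrary assumptions chosen for clarity; HDR+ itself uses far more sophisticated merging and tone mapping.

```python
import numpy as np

# Toy scene radiances: bright sky (1.0, 0.5) and a dark subject (0.02, 0.01).
scene = np.array([1.0, 0.5, 0.02, 0.01])

# A long exposure clips the sky; a short exposure preserves the highlights.
long_exposure = np.clip(scene * 4.0, 0, 1)   # sky values both clip to 1.0
short_exposure = np.clip(scene * 1.0, 0, 1)  # highlight detail survives

# Boost the shadows in software with a simple gamma curve as the tone map.
tonemapped = short_exposure ** (1 / 2.2)

print(long_exposure[0] == long_exposure[1])  # True: sky detail lost to clipping
print(tonemapped[2] > short_exposure[2])     # True: shadows brightened
```

The point of the sketch is the trade-off: underexposing protects highlights at the cost of dark, noisy shadows, and the burst fusion described earlier is what makes those brightened shadows clean enough to use.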


Gcam launches in the Google Pixel

Later, seeing the quality of Gcam's output, Google made its HDR+ technology the default mode for the critically acclaimed Google Pixel phone. DxOMark, the industry standard for camera ratings, declared in 2016 that the Pixel camera was "the best smartphone camera ever made". Reflecting on the evolution of the project, Marc says, "It took five years to get it really right…and we're grateful that X gave our team the long-term horizons and independence to make that happen."

What is next for Gcam?

Marc, who began his career developing a cartoon animation system that was used by Hanna-Barbera, is excited about the future of the team. “One direction that we’re pushing is machine learning,” he explains. “There’s lots of possibilities for creative things that actually change the look and feel of what you’re looking at. That could mean simple things like creating a training set to come up with a better white balance. Or what’s the right thing we could do with the background — should we blur it out, should we darken it, lighten it, stylize it? We’re at the best place in the world in terms of machine learning, so it’s a real opportunity to merge the creative world with the world of computational photography.”

Whatever comes next, it must be admitted that Gcam changed the standard for mobile photo applications, and the future is looking bright for Gcam.

Author: RoyalGuru


Catch all the latest trending news in the world at Royaltrendingnews.com

