Google has abandoned the Neural Core mobile chip in the Pixel 5.

Since the Pixel 2, Google has included a companion chipset of its own design in its smartphones to improve photography and other features. That trend now appears to be over: Google has confirmed that it dropped the Neural Core from its latest flagship smartphone, the Pixel 5.

The Pixel Neural Core is the successor to the Pixel Visual Core; both chips were designed by Google to improve photography. With the Neural Core in particular, Google also used the chip to speed up face unlock, Google Assistant, and other new features on the Pixel 4.

However, Google's Pixel "a" series had already suggested the extra chip wasn't strictly necessary: the Pixel 3a and 4a shoot and process photos at speeds similar to the Pixel 3 and Pixel 4 without it. Even so, it is surprising to see Google's homemade chipset missing from the Pixel 5's spec list.

In an interview with Android Police, Google confirmed that both the Pixel 5 and the Pixel 4a 5G lack the Neural Core. Google also noted that it optimized the Pixel 5's Snapdragon 765G to keep camera performance "similar" to the Pixel 4's.

Neither new phone has the Pixel Neural Core or face unlock.

Does this mean the Pixel Neural Core is gone forever? Probably not. Google has said it will bring Soli back in future hardware, so it's a safe bet the Neural Core will eventually return too. It's worth noting, however, that both new Pixels still include the Titan M security chip.

READ ALSO: This chip on Google’s phone may change the mobile photography industry.

Google's last-generation phone, the Pixel 3, was hailed as the best camera phone for a reason: Google processes every shot with the software algorithms in its HDR+ package, and with a bit of machine learning layered on top, some of the most spectacular photos can come from a phone with fairly ordinary camera hardware.

To help run these algorithms, Google uses a dedicated processor called the Pixel Visual Core, a chip we first saw in the Pixel 2 in 2017. This year, Google appears to have replaced the Pixel Visual Core with something called the Pixel Neural Core.

According to user Chenjie Luo, HDR+ on the first-generation Pixel ran on Qualcomm's HVX accelerator, and because the HVX was not designed for image processing, it was slow. To keep the shooting experience feeling instant, the Google Camera app kept an image cache, holding the most recent frames from the camera sensor in memory and queuing them for HDR+ processing in the background. That way, first-generation Pixel owners never noticed how long the HDR+ merge actually took.

But this approach created a problem: third-party photo-sharing apps like Instagram expect to shoot and share immediately, and users can't wait several seconds for HDR+ to finish before sharing. So on the first-generation Pixel, HDR+ remained exclusive to the Google Camera app and was unavailable to third-party apps. For this reason, and with considerable effort behind it, the Pixel Visual Core was born: it hardware-accelerates HDR+ so that third-party apps get a fully processed HDR+ photo almost instantly after capture.

The results are stunning, especially in high-contrast lighting, where both the foreground and background come out clear and faces are no longer left too dark. What makes the chip notable is that its eight IPU cores are programmable, so it is not strictly an ASIC and can serve many other scenarios. The original design goal was an all-round image processing chip; HDR+ was simply its first showcase.
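To make the caching trick concrete, here is a minimal Python sketch of the approach described above: a ring buffer holds the latest sensor frames so the shutter feels instant, while the slow HDR merge runs on a background thread. Every name here (FrameBuffer, hdr_merge, the buffer size) is an illustrative invention of ours, not Google Camera internals.

```python
import collections
import queue
import threading
import time

FRAME_BUFFER_SIZE = 9  # assumed burst size; HDR+ merges several recent frames


class FrameBuffer:
    """Keeps the most recent sensor frames in memory (a ring buffer)."""

    def __init__(self, size=FRAME_BUFFER_SIZE):
        self._frames = collections.deque(maxlen=size)
        self._lock = threading.Lock()

    def push(self, frame):
        with self._lock:
            self._frames.append(frame)

    def snapshot(self):
        # Grab the current burst the instant the shutter is pressed.
        with self._lock:
            return list(self._frames)


def hdr_merge(burst):
    """Stand-in for the slow HDR+ merge; we only fake the processing delay."""
    time.sleep(2.0)
    return f"merged photo from {len(burst)} frames"


pending = queue.Queue()


def background_worker():
    # Drains queued bursts so the camera UI never waits on the merge.
    while True:
        burst = pending.get()
        print("HDR+ finished:", hdr_merge(burst))


threading.Thread(target=background_worker, daemon=True).start()

buffer = FrameBuffer()
for i in range(30):  # the camera preview constantly feeds in frames
    buffer.push(f"frame-{i}")

pending.put(buffer.snapshot())  # the shutter press returns immediately
time.sleep(2.5)                 # demo only: give the worker time to finish
```

This is exactly why third-party apps were left out: they receive the frame the instant it is captured, before the background merge has produced the HDR+ result.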

The original Pixel Visual Core was designed to speed up Google's HDR+ image processing algorithms, which are what made Pixel 2 and Pixel 3 photos look so good. It uses machine learning and so-called computational photography to intelligently repair the less-than-perfect parts of a photo, and it is genuinely effective: it lets phones with off-the-shelf camera sensors take much better pictures.
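As a toy illustration of the computational-photography idea, the NumPy sketch below shows why merging a burst helps: averaging several noisy captures of the same scene cuts noise by roughly the square root of the frame count. Real HDR+ does robust alignment and per-tile merging; this is only the intuition, with made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(4, 4))  # the "true" scene, tiny for demo

# Each captured frame is the scene plus random sensor noise.
burst = [scene + rng.normal(0, 20, scene.shape) for _ in range(8)]

merged = np.mean(burst, axis=0)  # merge the burst by averaging

print("single-frame noise:", np.abs(burst[0] - scene).mean())
print("merged-frame noise:", np.abs(merged - scene).mean())
# Averaging N frames reduces noise by roughly sqrt(N); with 8 frames,
# the merged error comes out close to a third of the single-frame error.
```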

If the Pixel Neural Core lives up to its billing, the Pixel 4 will once again be vying for the top spot in smartphone photography.

Neural networks

It appears that Google is using a chip tuned for neural-network workloads to improve image processing in its 2019 Pixel phones. "Neural network" is a term you've probably heard more than once or twice, but the concept is rarely explained; it gets presented as some kind of Google-grade magic. It isn't, and the ideas behind neural networks are actually fairly easy to wrap your head around.

A neural network is a group of algorithms modeled on the human brain: not on what the brain looks like, or even how it works, but on how it processes information. Neural networks take in sensory data and label it through what is called machine perception, collecting that data from external sources such as a machine's sensors.

That data comes in the form of numbers called vectors. All external data from the "real" world, including images, sounds, and text, is converted into vectors and grouped into data sets. You can think of the neural network as an extra layer on top of what is stored on a computer or phone, a layer that holds data about what the stored data means: what it looks like, what it says, and when it happened. Once that catalog is built, new data can be classified and compared against it.
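Here is a small, hand-rolled Python sketch of that "catalog of vectors" idea: each item is reduced to a vector of numbers, and new data is classified by comparing it against the catalog. The feature values below are invented purely for illustration; a real network learns its own representation rather than using hand-picked features.

```python
import numpy as np


def cosine_similarity(a, b):
    """How alike two vectors are, ignoring their overall magnitude."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# A hand-made "catalog": each entry is a vector of made-up features,
# here [furriness, whiskers, barks, meows].
catalog = {
    "cat": np.array([0.9, 0.9, 0.0, 0.9]),
    "dog": np.array([0.8, 0.3, 0.9, 0.0]),
}

# A new, unlabeled observation converted to the same vector form.
new_item = np.array([0.85, 0.8, 0.05, 0.7])

best = max(catalog, key=lambda k: cosine_similarity(catalog[k], new_item))
print("classified as:", best)  # -> cat
```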

A real example makes all of this clearer. NVIDIA makes processors that are very good at running neural networks. The company fed enormous numbers of cat photos into a network, and once training was complete, a cluster of computers running that network could identify a cat in any photo containing one. Small cats, big cats, white cats, calico cats, even mountain lions and tigers register as cats, largely because the neural network holds so much data about what a cat is.
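You can reproduce a miniature version of that experiment today using an off-the-shelf pretrained network instead of training your own. The sketch below uses torchvision's ImageNet-trained ResNet-18, which is our choice for illustration, not anything tied to the Pixel's chip, and "cat.jpg" is a hypothetical stand-in for whatever photo you supply.

```python
import torch
from PIL import Image
from torchvision import models

# Load an ImageNet-pretrained classifier and its matching preprocessing.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

# "cat.jpg" is a hypothetical input file; any photo works.
img = Image.open("cat.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # shape: (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)

# Map the highest-scoring output back to a human-readable label.
label = weights.meta["categories"][logits.argmax().item()]
print("the network thinks this is a:", label)  # e.g. "tabby" or "tiger cat"
```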

Given that example, it's not hard to see why Google wants this capability inside a phone. A Neural Core with access to large amounts of data could recognize what the camera lens is seeing and then decide what to do about it. Perhaps the data about what it sees and expects gets passed to the image processing algorithm. Or the same data could feed the Assistant so it can identify a sweater or an apple. Or maybe written text could be translated faster and more accurately than Google manages today.

It's easy to imagine Google designing a small chip that handles neural-network inference and interfaces with the image processor in a phone, and easy to see why it would. We don't know exactly what the Pixel Neural Core is or everything it might be used for, but once the phone is "officially" released, we'll learn more about it and its actual details.