
Intel adds computational intelligence to its vision of the future

Less than a month ago we discussed Intel's acquisition of InVision Biometrics, a company that developed 3D sensor technology for recognizing motion and gestures.  At the time we noted that the InVision Biometrics solution was MEMS-based hardware, a good match for integration into Intel chips. We also noted that both Intel and Qualcomm are entering gesture recognition and other areas that until now have been software-dominated, most likely intending to incorporate these capabilities into next-generation CPUs.

Today the news broke that Intel is opening a new research center in the area of computational intelligence.  They're funding this research center to the tune of $3 million per year for the next five years.

The news article includes the following statements from Intel that shed light on their plans:

The new institute will focus on technologies that serve as an infrastructure for intelligent thinking such as processing architectures and techniques for computerized systems that learn to process data from sensors and convert them to comprehensible information.

Intel VP and Microprocessor and Chip Development Group general manager Ron Friedman said, "We believe that sensory ability will be an integral part of future computer systems because mankind will take advantage of our systems in order to interpret received data."

It's not yet clear exactly what Intel is planning for the "computational intelligence" at this center, but it relates to sensor interpretation and it's targeting integration into CPUs and other chips.  This clearly continues the trend we saw earlier with Intel's and Qualcomm's acquisitions in gesture recognition.  Whether this area of technology will be incorporated into general-purpose or special-purpose chips remains to be seen.  Will gesture recognition and sensor interpretation be in future Intel CPUs, or will future devices have not only CPUs and GPUs but also SPUs?  (Yes, we know, the acronym SPU has been used before, but we think it's still available for "sensor processing unit" as sensor processing units start to take off.)
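To make the idea of a sensor-interpretation workload concrete, here's a minimal sketch (entirely hypothetical, not Intel's technology or API): the kind of always-on computation that today runs in software on the CPU and that a dedicated "SPU" could offload. It classifies a trace of accelerometer readings as a left or right swipe gesture by thresholding the dominant motion.

```python
# Hypothetical sketch of a sensor-interpretation task: classify a 1-D
# accelerometer trace as a left/right "swipe". This is the sort of
# simple, continuous workload a sensor processing unit could run in
# hardware instead of waking the main CPU.

def classify_swipe(samples, threshold=0.5):
    """Return 'left', 'right', or 'none' for a list of x-axis
    accelerometer readings (arbitrary units)."""
    peak = max(samples, key=abs)      # strongest reading in the trace
    if abs(peak) < threshold:
        return "none"                 # motion too weak to count as a gesture
    return "right" if peak > 0 else "left"

print(classify_swipe([0.1, 0.9, 1.4, 0.6]))   # strong positive peak -> right
print(classify_swipe([-0.2, -1.1, -0.7]))     # strong negative peak -> left
print(classify_swipe([0.05, -0.1, 0.02]))     # no clear motion -> none
```

Real gesture recognition is of course far more involved (3D depth data, learned models), but even this toy version shows why hardware offload is attractive: the classifier must run continuously on every new batch of samples, whether or not a gesture ever occurs.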

And what kinds of devices is Intel targeting with their sensor interpretation and computational intelligence? Grizzly Analytics believes it must be mobile devices of various sorts, where hardware implementation offers the greatest benefit and the sales volume potential is largest.

As a not-insignificant side-note, this research center is being created in Israel, the same start-up nation where InVision Biometrics and PrimeSense (makers of the gesture recognition in Microsoft's Kinect) were founded.  The other Israeli start-ups in the area, some of which we discussed here, are looking like better and better M&A targets.
