News


Intel Keynote at Computex 2014: 14nm Core-M, SoFIA, Devil’s Canyon, DC P3700 and RealSENSE

While we were unable to run a live blog of the Intel Keynote this year, there were still a number of interesting announcements made by Renée James, President of Intel.  The first job of a keynote is to explain part of the past and the future, and we were told that the scope of the ‘Internet of Things’ is predicted to be in the region of 50 billion units by 2020.  Intel’s specific focus in the semiconductor part of the equation is Moore’s Law, and as a result they showed us the first device with a 14nm part, the ASUS Transformer T300 Chi that we saw yesterday at the ASUS Press Conference.  Jonney Shih from ASUS was on the Intel stage showcasing the T300 Chi.

The first of the 14nm processors will arrive under the new Core-M branding.  While clock speeds and core counts were not mentioned, the ‘Core’ part of the name means that this should be a Broadwell-derived component.  Echoing what CEO Brian Krzanich said earlier in the year, Renée confirmed that Core-M would be in the hands of end-users by the holiday season.

Also on show were tablets based on the Core-M technology, in the 10W range but also fanless.

As part of Intel’s LTE strategy, Mr Shih also demoed the ASUS Transformer Pad 303 with integrated LTE, showing HD streaming of a film.

Intel’s SoFIA platform, the combined quad-core Atom and 3G modem for entry and value markets, will be shipping in Q4.  This is the primary purpose of the deal with Rockchip, with derivatives of SoFIA being sold by both parties.  Intel’s strategy in this is to get into more markets more quickly and spread the brand.

For enthusiasts, news about Devil’s Canyon being launched was expected, and Intel delivered a brief statement regarding the top SKU having four cores at 4 GHz, as well as the Pentium Anniversary model.  Details about these processors came through Intel’s PR channels, but we are still awaiting a retail date.  Review samples should be with us when we get back from Computex.

Another element to the presentation was the official launch of the Intel DC P3700, an enterprise SSD for datacenters.

Intel also showed their RealSENSE camera, which uses a 60 fps depth map sensor to allow interaction in real time.  As part of the RealSENSE ecosystem, Intel is releasing an SDK with a camera and offering a $1m prize for the best app created with the device.  It essentially looks like an upgraded Kinect sensor.

The onscreen demo of RealSENSE was a laptop with the camera installed and an avatar moving in real time.  The software was programmed to track fifty different elements and muscles of the face, including the direction in which the eyes were looking.


Some Thoughts on Apple’s Metal API


Though it seems like Apple’s hardware divisions can hardly keep a secret these days due to the realities of mass production, the same is fortunately not true for their software divisions. Broad strokes aside, Apple managed to pack in a number of surprises in their OS X and iOS presentations at WWDC yesterday, and there’s nothing that ended up being quite as surprising to me as the announcement of the Metal API for iOS.

Later this week Apple will be holding their Metal developers sessions, at which time we’ll hopefully get some further details on the API and just how Apple intends to have developers use it. In the meantime with the preliminary Metal programming guide posted over on Apple’s developer website, I wanted to spend a few minutes musing over yesterday’s announcement, how Apple ended up developing their own API, and what this may mean for users and game developers.

Why Low-Overhead APIs?

First and foremost, let’s quickly recap just what exactly Apple has announced. Metal is Apple’s forthcoming low-overhead/low-level graphics and compute API for iOS. Metal is primarily geared towards gaming on iOS, and is intended to offer better graphics performance than the existing OpenGL ES API by curtailing driver overhead and giving developers more direct control over the GPU.

As our regular readers are no doubt well aware, Metal is the latest in a wave of low-level graphics APIs to be introduced over the last year in the GPU space, joining the ranks of AMD’s Mantle and Microsoft’s DirectX 12. In the case of Metal, as has been the case with all of these APIs, the idea is rooted in the fact that while high level APIs provide a number of important features, from libraries to hardware abstraction, the overhead from this functionality is not worth the benefits, especially in the hands of highly seasoned programmers who have the experience and the means to go close-to-metal and bang on the hardware directly. The situation facing developers in these cases is that at a time when GPU performance growth is rapidly outpacing CPU performance growth, the API and driver overhead has gone from problematic to intolerable, leading to developers wanting to access the hardware directly.


How The Low-Level Mantle API Benefitted DICE’s Frostbite Engine

Metal in turn is the API through which Apple will provide this access. By peeling back the driver and API stack to the bare minimum, developers get to tell the GPU exactly what they’re doing and how they want it done, bypassing large chunks of CPU-intensive code that would previously do this for the developer. Whenever we’re talking about these low-level APIs it’s important to note that they’re merely ways to improve efficiency and are not miracle workers, but when faced with the most applicable bottleneck, the draw call – what’s essentially a single function call for the GPU – the increase in throughput can be remarkable. We won’t spend too much more time on the whys of Metal, as we’ve written much longer outlines on low-level APIs before that don’t need to be repeated here, but it’s important to establish a baseline for evaluating Metal.
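To make the draw call bottleneck concrete, here is a toy cost model in C++. The numbers and function names are purely illustrative assumptions on our part, not real driver measurements: the point is only that a fixed per-call cost dominates when thousands of objects are each submitted with their own draw call, which is exactly the overhead a low-level API attacks.

```cpp
#include <cstddef>

// Hypothetical costs (illustrative only): each draw call pays a fixed CPU
// overhead for API/driver state validation, plus a small per-object cost
// for the work the application does itself.
constexpr double kPerCallOverheadUs = 50.0;  // assumed driver/API cost per call
constexpr double kPerObjectCostUs   = 1.0;   // assumed app-side cost per object

// CPU time when every object is submitted with its own draw call:
// the fixed overhead is paid once per object.
double unbatchedCpuTimeUs(std::size_t objects) {
    return static_cast<double>(objects) * (kPerCallOverheadUs + kPerObjectCostUs);
}

// CPU time when all objects share a single draw call:
// the fixed overhead is paid exactly once.
double batchedCpuTimeUs(std::size_t objects) {
    return kPerCallOverheadUs + static_cast<double>(objects) * kPerObjectCostUs;
}
```

Under these assumed numbers, 4,000 individual draw calls cost roughly 50x the CPU time of one batched submission, which is the flavor of gap the Crytek demo points at.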

Are SoCs Draw Call Limited?

Upon hearing Apple’s Metal announcement, perhaps the greatest surprise was that iOS developers were in a position where they needed and could benefit from a low-level API like Metal. In the PC space we’ve been seeing low-level APIs rolled out as a solution to the widening gap between CPU and GPU performance; however, the SoC-class processors in Apple’s iOS devices are a very different beast. As one would expect for a mobile product, neither the CPU nor the GPU is high performance by PC standards, so why should a low-level API be necessary?

The answer is that while SoCs are lower performance devices, the same phenomenon that has driven low-level APIs on the PC has driven them on mobile devices, just on a smaller scale. GPU performance is outgrowing CPU performance at the SoC level just as it has at the PC level, and even worse, SoC-class CPUs are so slow that even small amounts of driver overhead can have a big impact. While we take 4000 draw calls for granted on desktop hardware – overhead and all – it’s something of a sobering reminder that this isn’t possible on even a relatively powerful SoC like the A7 with OpenGL ES, and that it took Metal for Crytek to get that many draw calls in motion, never mind other CPU savings such as precompiled shaders. If Apple intends to further gaming on iOS (and all signs are that they do), then capable programmers are going to want low level GPU access to maximize their graphical quality, the same as they do on the desktop and on game consoles.


Apple Metal Thread Model (Note that no Apple SoC has more than 2 CPU cores yet)

Ecosystems & Portability

But on that note there’s quite a bit that goes into providing developers with these kinds of tools, which puts Apple in a very interesting position among hardware and OS vendors. Of the other low-level APIs we’ve seen so far – AMD’s Mantle and Microsoft’s DirectX 12 – the former is an API established by a hardware vendor who has to ride on top of other companies’ CPUs and OSes, and the latter comes from an OS vendor who has to ride on top of third party CPUs and GPUs. Apple on the other hand is in the enviable position of being as close as anyone can be to offering a fully vertical ecosystem. Apple designs their own CPUs, configures their own SoCs, and writes their own OS. The only portion of the chain that Apple doesn’t control is the GPU, and even then the company has exclusively used Imagination Technologies’ PowerVR GPUs for the last 7 years with no signs of this changing. So for all practical purposes Apple has a closed ecosystem that they control from top to bottom, and can design for accordingly.

A closed ecosystem in turn means that Apple can achieve a level of OS, hardware, and programming language integration that no one else can achieve. Metal doesn’t need to take into consideration any other GPU architectures (though Apple in all likelihood has left it generic enough to be portable if the situation arises) and the OS around it can be tailored to the API, rather than making the API fit within the confines of the OS. This doesn’t necessarily mean Apple is going to make significant use of this integration, but it will be interesting to see just what Apple does do with so much control.


A7 SoC Floorplan (Image Courtesy Chipworks)

Another interesting thing to watch as Metal plays out is how Apple handles portability from OpenGL ES, if they try to handle it at all. On the whole, it’s accepted that a low-level API like Metal will have minimal portability from higher level APIs such as OpenGL ES. The exception thus far has been shader programs, which, owing to their fundamentally low level nature, have been more portable. In the case of AMD’s Mantle, for example, we have seen AMD specifically support DirectX’s shader language – HLSL – to make porting to Mantle easier. Shader programs are just one part of a bigger picture, but their growing complexity and low level nature mean that there are still benefits to being able to port them among APIs even when the API commands themselves are not portable.

At least for the moment, Apple’s Metal programming guide makes no mention of porting from the existing OpenGL ES API. Looking at the Metal shader language and comparing it to the OpenGL ES shader language (GLSL ES), there is some initial promise given the two languages’ shared C heritage, but it’s also clear that for better or worse Apple hasn’t held back from eclipsing OpenGL ES here. Metal’s shader language is based on C++11, and consequently includes features not available in the C-derived GLSL ES. Furthermore, comparing the function libraries, there are a number of identical functions, but also many more that the two shader languages do not have in common. Portability out of Metal aside, it’s not at all clear whether GLSL ES shaders are meaningfully portable into Metal; if they aren’t, that means additional work for developers, a specific concern if Apple is trying to land console-like games on iOS devices. So it will be interesting to see how this plays out.
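As a sketch of the expressiveness gap, consider what a C++11-based shading language can state once that GLSL ES, lacking templates, must spell out separately for each type and vector size. The function below is our own illustrative example compiled as plain C++11, not code from Apple’s or Khronos’ libraries:

```cpp
#include <array>
#include <cstddef>

// In GLSL ES, a dot product over a custom element type or unusual vector
// width needs a hand-written function per case. A C++11-based language can
// express the whole family generically with one template.
template <typename T, std::size_t N>
T dotProduct(const std::array<T, N>& a, const std::array<T, N>& b) {
    T sum{};  // value-initialized accumulator
    for (std::size_t i = 0; i < N; ++i)
        sum += a[i] * b[i];
    return sum;
}
```

Whether such generic code survives a port back to GLSL ES, or whether GLSL ES code maps cleanly forward, is precisely the open question above.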

Of course Android portability is also going to raise a flag here, though at first glance it actually seems unlikely that this will be a concern. Without an equivalent API – and the OpenGL AZDO concept isn’t going to be fully applicable to OpenGL ES – the games that benefit the most from Metal are also the games least likely to be on Android, so while portability from Android looks far from easy, there also appears to be little need to handle it. Android portability would seem to be best handled by traditional porting methods using OpenGL ES, which retains its common API status and will be sufficient for the kinds of games that will run on both ecosystems.

Metal Computing

On a final note, while we’ve discussed graphics almost exclusively thus far, it’s interesting to note that Apple is pitching Metal as an API for GPU compute as well as graphics. Despite being one of the initial promoters of the OpenCL API, Apple has never implemented OpenCL or any other GPU compute API on iOS, even after adopting the compute-friendly PowerVR Rogue GPU for the A7 SoC. As a result, GPU compute on iOS has been limited to what OpenGL ES can be coaxed into; while not wholly incapable, it is an API designed for dealing with images as opposed to free-form data.

The low-level nature of Metal on the other hand means that it’s a good (or at least better) fit for GPU computing, as the lack of graphics abstraction makes it more capable of handling the workflows and data types of compute tasks. This is one area in particular where the Metal shader language being based on a subset of C++11 is a benefit to Apple, as it provides a solid foundation for writing compute kernels. Nonetheless, it remains to be seen just how adaptable Metal is – can it match the compute functionality of OpenCL 1.2 or even OpenGL 4.x compute shaders? – but even if it’s only of limited use, it means Apple is finally ready to approach GPU computing on iOS devices.
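The compute-kernel model these APIs share is simple to state: a kernel body runs once per point of a dispatch grid, identified by its thread index. Below is a simplified CPU-side sketch in plain C++ of that model; the names saxpyKernel and dispatch are our own, and on Metal the kernel would of course execute across GPU threads rather than in a loop:

```cpp
#include <cstddef>
#include <vector>

// The kernel body: runs once per grid point. 'index' plays the role of the
// per-thread ID a GPU compute API would supply to each invocation.
void saxpyKernel(std::size_t index, float a,
                 const std::vector<float>& x, std::vector<float>& y) {
    y[index] = a * x[index] + y[index];  // classic SAXPY: y = a*x + y
}

// The dispatch: on a GPU this would launch gridSize parallel invocations;
// here a sequential loop stands in for the hardware scheduler.
void dispatch(std::size_t gridSize, float a,
              const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < gridSize; ++i)
        saxpyKernel(i, a, x, y);
}
```

Note how nothing here involves vertices, textures, or images: this is the free-form-data style of work that OpenGL ES handles awkwardly and that a compute-capable Metal would handle natively.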


Google Begins Android 4.4.3 Rollout


After initially posting factory images yesterday, Google has begun its OTA rollout of the latest iteration of Android 4.4 KitKat. Android 4.4 launched in October of last year alongside Google’s newest smartphone, the Nexus 5. On the design front, Android 4.4 brought many refinements to the user interface, introducing the ability to enable translucency in areas such as the status bar and navigation buttons. In addition, the legacy blue parts of the interface, like the status bar icons, were replaced with more unified white elements. It also featured the new Google Experience Launcher, which offers much deeper Google Now integration, including the ability to trigger voice search by saying “OK Google” from the homescreen.

Under the hood, Google brought optimizations like ZRAM support, which allows data for idle background applications to be stored in a compressed RAM partition to free up memory for applications in use. A low-RAM API with more aggressive memory management, intended to improve performance on devices with as little as 512MB of RAM, was included as well. Google also introduced their new experimental Java runtime, called ART. ART aims to improve application performance over Android’s current Dalvik runtime, which uses just-in-time compilation, by instead using ahead-of-time compilation to compile Java bytecode into machine code at install time.

Shortly afterward, Google released versions 4.4.1 and 4.4.2, which included substantial improvements to the Nexus 5’s camera performance by focusing faster and having the camera software prefer faster shutter speeds. The algorithms for calculating white balance and color balance were also tweaked to address complaints about inaccurate color in captured images. Compatibility between the ART runtime and third party applications was also improved, along with many other bug fixes and security improvements.

Android 4.4.3 is mainly an update to fix outstanding bugs in Android KitKat, but there are a couple of tweaks to the user interface that come along with it as well.

One of the long awaited fixes in this release is for excessive battery drain that could occur when an application used a device’s camera, caused by a process called ‘mm-qcamera-daemon’ which controls the camera on Qualcomm-powered devices. After an app using the camera was closed, the process would continue to run in the background and cause abnormally high CPU usage, which resulted in increased battery drain and higher device temperatures than normal.

In terms of updates to the UI, Android 4.4.3 brings a new dialer application and changes to the People app. The new dialer features white keys with a different shade of blue that fits better with the overall design of KitKat itself. The black and turquoise of the old dialer had seemed like a design outlier compared to Google’s newer applications for quite some time, and it’s good to see Google continuing to unify the design of the Android OS. The new People application is largely the same as its predecessor, but it replaces the older grey contact photo icons, shown for contacts without an assigned picture, with new colorful ones.

The update is currently known to be rolling out to Google’s 2013 Nexus 7 with LTE, and it should shortly reach the other Nexus devices that support Android KitKat, which include the Nexus 4, Nexus 5, Nexus 7 (2012/2013 WiFi), and Nexus 10. Google Experience devices should receive their updates to Android 4.4.3 in the near future. Google has yet to post a summary changelog for the update and all the various bug and security fixes it includes, but when it becomes available it will be added here. As always, Google’s update rollouts are done in stages, and it may take some time for your device to receive it.

Source: Google Nexus 7 OTA via Android Police