Dell To Add Off-Host BIOS Verification To Endpoint Security Suite Enterprise

At CES this year, Dell broke from tradition somewhat and focused more on their business products. When I had a chance to talk to them, they were very enthusiastic about being one of the few companies offering complete end-to-end solutions for the enterprise. Part of that end-to-end solution is Dell's Endpoint Security Suite Enterprise, which includes data protection, authentication, and malware prevention.

A new feature coming to this suite is BIOS verification. Dell found a gap in the market when it comes to securing the boot process. BIOS attacks are especially nasty because they load before the operating system and can more easily avoid detection. Most malware protection products focus on heuristics and virus signatures, but that landscape is changing, with less mass targeting of malware and more attacks directed at specific companies, or even people. Dell's Endpoint suite was recently updated to use Cylance as its anti-virus engine, which relies on machine learning and, according to Dell, can stop 99% of malware, even zero-day or unknown exploits. Signature-based detection is accurate 50% of the time or less, according to the same tests.

But all of that protects the operating system. If malware gets into the BIOS, it can be very difficult to detect. There are already methods to help deal with this – Microsoft Windows offers a protection called Measured Boot which verifies the BIOS with the help of the Trusted Platform Module. Dell wants to take this one step further and remove the local host from the equation entirely. Instead, Dell computers with the Endpoint Suite will be able to compare a SHA-256 hash of the BIOS against a known-good version kept on Dell's servers. Since Dell is the one that originally creates the BIOS, they are the authority to ensure that it has not been compromised.

Dell's suite will compute a hash of the BIOS and send it to Dell. If the hash does not match the known-good value, Dell's servers will send an alert to the designated IT admins for the organization.
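As a rough illustration of the comparison step, here is a minimal C# sketch that hashes a captured firmware image and checks it against a reference digest. The file name, the hard-coded reference hash, and the idea of doing the comparison locally are all assumptions made for the example; in Dell's design the known-good hash lives on Dell's servers and the comparison happens off-host.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class BiosHashCheck
{
    // Hypothetical known-good SHA-256 digest. In Dell's design the
    // reference value lives on Dell's servers, not on the endpoint.
    const string KnownGoodHash =
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08";

    static void Main()
    {
        // Assumption for the sketch: the captured BIOS image is a local file.
        byte[] image = File.ReadAllBytes("bios_capture.bin");

        using (var sha = SHA256.Create())
        {
            string hash = BitConverter.ToString(sha.ComputeHash(image))
                                      .Replace("-", "")
                                      .ToLowerInvariant();

            // A mismatch triggers an alert to IT admins rather than
            // blocking the boot, per Dell's described design.
            Console.WriteLine(hash == KnownGoodHash
                ? "BIOS hash matches known-good value."
                : "BIOS hash mismatch - flag for IT response.");
        }
    }
}
```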

Dell’s Latitude 13 7000 will be available with BIOS Verification

Unlike Secure Boot, Dell's solution does not actually stop the device from booting, or even alert the end user. The hashing and comparison are not done in real time; rather, after the machine finishes booting, the Endpoint Suite sends the hash to Dell. Dell made it very clear that their intention is not to interfere with the device itself, but rather to notify the IT admins of an issue so that they can deal with it through their own response and policy.

One obvious question I had to ask was whether this same hashing could be done on a continuous basis, rather than just at boot, since the Endpoint Suite is what gathers the information and sends it to Dell. They were happy to let me know that a policy-based scan of the BIOS is something they are working on, and they hope for it to be available in Q2 of this year. Scanning the BIOS every hour, or at whatever interval the IT admins deem appropriate, would give them a leg up in catching malware before it even goes through a boot process and works its way into the system.
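For a sense of what a policy-based recurring scan might look like, here is a minimal sketch using a timer. The hourly interval, the method names, and the reporting step are all hypothetical, since Dell has not described the implementation.

```csharp
using System;
using System.Threading;

class PolicyBasedBiosScan
{
    static void Main()
    {
        // Hypothetical policy value; the article mentions hourly scans,
        // or whatever interval the IT admins choose.
        TimeSpan interval = TimeSpan.FromHours(1);

        // Kick off a scan immediately, then repeat on the policy interval.
        using (var timer = new Timer(_ => RunBiosHashCheck(),
                                     null, TimeSpan.Zero, interval))
        {
            Console.ReadLine(); // keep the demo process alive
        }
    }

    static void RunBiosHashCheck()
    {
        // Placeholder: capture the BIOS image, hash it, and submit the
        // hash to the verification service for off-host comparison.
        Console.WriteLine($"{DateTime.Now:u} - BIOS hash submitted.");
    }
}
```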

Dell has focused very much on being a one-stop shop for all of a company's computing needs, from servers, to desktops, to displays, and even services. This addition to their Endpoint Security Suite Enterprise will initially be available for Dell's lineup of commercial PCs based on 6th generation Intel processors. They were keen to point out that BIOS attacks are nowhere near as commonplace as traditional malware, but it is important to be out in front of these types of attacks.

Source: Dell

Windows 10 Feature Focus: .NET Native

Programming languages seem to be cyclical. Low-level languages give developers the chance to write very fast code with the minimum of instructions necessary, but the closer you code to the hardware the more difficult it becomes, and the developer needs a good grasp of the hardware in order to get the best performance. In theory, everything could be written in assembly language, but that has some obvious limitations. Over time, programming languages have been abstracted from the hardware they run on, which gives developers the advantage that they do not have to micro-manage their code, and the code itself can be compiled against different architectures.

In the end, the processor just executes machine code, and the job of moving from a developer's mind to machine code can be done in several ways. Two of the most common are Ahead-of-Time (AOT) and Just-in-Time (JIT) compilation. Each has its own advantages, but AOT can yield better performance because the CPU is not translating code on the fly. For desktops this has not necessarily been a big issue, since they generally have sufficient processing power anyway, but in the mobile space processors are much more limited in resources, especially power.

We saw Android move to AOT with ART just last year, with the actual compilation of the code done on the device after the app is downloaded from the store. With Windows, apps can be written in more than one language: apps in the Windows Store can be written in C++, which is compiled AOT, in C#, which runs as JIT code using Microsoft's .NET Framework, or even in HTML5, CSS, and JavaScript.

With .NET Native, Microsoft is now allowing C# code to be pre-compiled to native code, eliminating the need to fire up the majority of the .NET Framework and runtime, which saves time at app launch. Visual Studio will now be able to do the compile to native code, but Microsoft is implementing quite a different system than Google has done with Android's ART. Rather than having the developer do the compilation and upload the result to the cloud, and rather than having the code compiled on the device, Microsoft will do the compilation themselves once the app is uploaded to the store. This allows them to use their very well-known C++ compiler back end, and any tweaks they make to the compiler going forward can be applied to all .NET Native apps in the store without having the developer recompile. Microsoft's method is actually very similar to Apple's latest changes as well, since Apple will also recompile on their end if they make changes to their compiler.

They have added some pretty interesting functionality to their toolkit to enable .NET Native. Let’s take a look at the overview.

Starting with C# code, the source is compiled into Intermediate Language (IL) code using the same compiler used for any C# code destined to run on the .NET Framework as JIT code. The resulting IL is exactly the same as what you would get if you wanted to run the code as JIT. If you were not going to compile to native, you would stop right here.
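To make this first stage concrete, the snippet below shows a trivial C# method along with an approximation of the IL the C# compiler emits for it. The IL listing is illustrative only; the exact output varies with compiler version and optimization settings, and can be inspected with a disassembler such as ildasm.exe.

```csharp
// A trivial C# method; compiling this with the standard C# compiler
// produces IL, not machine code.
static int Add(int a, int b)
{
    return a + b;
}

// An approximation of the IL emitted for Add, as a disassembler such
// as ildasm.exe would show it (exact output varies by compiler version
// and build settings):
//
//   .method private hidebysig static int32 Add(int32 a, int32 b)
//   {
//       ldarg.0   // push the first argument
//       ldarg.1   // push the second argument
//       add       // add the two values
//       ret       // return the result
//   }
```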

To move this IL code to native code, it is next run through the Intermediate Language Compiler (ILC). Unlike a JIT compiler, which runs on the fly while the code is executing and only ever sees a tiny portion of the program, the ILC can see the entire program and can therefore make larger optimizations. The ILC also has access to the .NET Framework to add in the necessary code for standard functions. It will actually generate C# code for any WinRT calls made, to avoid the framework having to be invoked during execution; that generated C# is then fed back into the toolchain as part of the overall project so that all of these calls can be static calls in the native code. The ILC then performs any required transforms on the C# code: C# can rely on the framework for certain functions, and these need to be transformed since the framework will not be invoked once the app is compiled as native.

The resulting output from the ILC is a single file which contains all of the code, all of the optimizations, and whatever parts of the .NET Framework are necessary to run the app. Finally, this monolithic output is put through a Tree Shaker, which looks at the entire program, determines which code is used and which is not, and expunges any code that is never going to be called.
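As a simple illustration of what the Tree Shaker does, consider the hypothetical class below; the names are invented for the example.

```csharp
using System;

// Hypothetical app code, invented to illustrate tree shaking.
class Geometry
{
    // Called somewhere in the app, so the Tree Shaker keeps it.
    public static double Area(double radius)
    {
        return Math.PI * radius * radius;
    }

    // Never called anywhere in the whole program. Because the ILC sees
    // the entire program at once, the Tree Shaker can prove this and
    // drop it from the native image, along with any framework code
    // that only this method depended on.
    public static double Circumference(double radius)
    {
        return 2 * Math.PI * radius;
    }
}
```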

The resultant output from ILC.exe is Machine Dependent Intermediate Language (MDIL), which is very close to native code but retains a light layer of abstraction. MDIL contains placeholders which must be filled in, through a process called binding, before the code can be run.

The binder, called rhbind.exe, turns the MDIL into a final set of outputs: a .exe file and a .dll file. This final output runs as native code, but it still relies on a minimal runtime which contains the garbage collector.

The process might seem pretty complicated, with a lot of steps, but there is a method to the madness. By keeping the developer's code as IL and treating it just like non-native code, the debugging process is much faster, since the IDE can run the code in a virtual machine with JIT and avoid having to recompile all of the code for every debugging pass.

The final compile will be done in the cloud for any apps going to the store, but the tools will also be available locally to help test the app before it is deployed, to ensure there are no unseen consequences of moving to native code.

All of this work is done for really one reason: performance. Microsoft's numbers for .NET Native show up to 60% quicker cold startup times for apps, and 40% quicker launch with a warm startup. Memory usage will also be lower, since the operating system will not need to keep the full runtime resident alongside the app. All of this is important for any system, but especially for low-powered tablets and phones.

Microsoft has been testing .NET Native with two of their own apps: both Wordament and Fresh Paint have been running as native code on Windows 8.1. This is the case for both the ARM and x86 versions of the apps, and compilation in the cloud ensures that each device downloads the correct executable for its platform.

Microsoft Fresh Paint

.NET Native has been officially released with Visual Studio 2015, and is available for use with the Universal Windows App platform, and therefore apps in the store. At this time it is not available for desktop apps, although that could certainly come with a future update. That is not surprising, though, since Microsoft really wants developers to move to the new app platform anyway.

There is a lot more under the covers in moving from managed code to native code, but this is a high-level overview of the work that has been done to give developers a way to get better performance without having to move away from C#.

Microsoft Office 2016 Is Released Worldwide Today

Today Microsoft is launching a pretty important set of products. Office 2016 is now available worldwide, and marks the first major update to the suite since Office 2013 was released in January of 2013. While many people think Microsoft and Windows are…