Coping During Development
One of the biggest problems—and the single biggest benefit—of developing software under a LUA is that you have to be prepared to deal with code access security (CAS). CAS is the feature of the .NET Common Language Runtime (CLR) that assigns permissions to code, rather than relying on the security context of the user under which the code is running. For example, code run from the Internet has severely restricted permissions even if the user has administrative privileges. CAS doesn't override resource protections in Windows; it sits on top of them.
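To make that concrete, here's a minimal sketch of how a CAS check surfaces in code: the method demands read access to a file before touching it, and if any caller on the stack hasn't been granted that permission (for example, because the assembly was loaded from the Internet zone), the demand throws a SecurityException no matter what the user's Windows privileges are. The file path is purely illustrative.

```csharp
using System;
using System.IO;
using System.Security;
using System.Security.Permissions;

class CasDemo
{
    static void ReadConfig(string path)
    {
        try
        {
            // Walks the call stack: every caller must have been granted
            // FileIOPermission for this path, or the demand fails.
            new FileIOPermission(FileIOPermissionAccess.Read, path).Demand();

            string text = File.ReadAllText(path);
            Console.WriteLine("Read {0} characters.", text.Length);
        }
        catch (SecurityException)
        {
            // CAS denied the access, even though the Windows ACL might allow it.
            Console.WriteLine("Insufficient code access permissions.");
        }
    }

    static void Main()
    {
        ReadConfig(@"C:\Temp\app.config"); // hypothetical path
    }
}
```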
The good news is that you'll be much better prepared to avoid the LUA bugs I wrote about in part 4 and to handle the issues raised by partially trusted assemblies. One of the problems with .NET development as done in Visual Studio is that Microsoft wanted development to be easy, so it made writing Full Trust applications the default. You have to do extra work to write a partially trusted ASP.NET or Windows Forms application, so most developers take the path of least resistance and write Full Trust apps. Full Trust essentially means that the CLR skips code access permission checks, so developers don't have to write any security code. The result is insecure applications that are easy to attack and to use as a way into a user's machine.
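One way to opt out of that default, sketched here under assumed names and paths, is to put declarative permission requests at the assembly level (supported in .NET Framework 1.x and 2.0), so the CLR grants your code only what it asks for and refuses anything it should never need.

```csharp
using System.Security.Permissions;

// Grant only the minimum the application needs to run...
[assembly: FileIOPermission(SecurityAction.RequestMinimum,
    Read = @"C:\ProgramData\MyApp")]        // hypothetical data folder

// ...and explicitly refuse permissions it never uses, so a bug or an
// attacker can't exercise them through this assembly.
[assembly: SecurityPermission(SecurityAction.RequestRefuse,
    UnmanagedCode = true)]
```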
The goal is to write partially trusted applications that request the fewest permissions your code can get by with. How you do that is well beyond the scope of this article or series, but one thing you'll need to be prepared for is the AllowPartiallyTrustedCallers attribute when you write strongly named assemblies. You'll also need to learn how to demand permissions, so that the CLR verifies that every caller up the stack has permission to access the protected resources your code uses, as well as the resources used by the code it calls further down. In ASP.NET applications, you'll need to look carefully at the permissions granted by the various trust levels and pick the most appropriate one; or, better yet, create a custom trust level that has exactly the permissions your application needs. This is a complex topic, but one you need to understand if you're going to write secure applications.
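As a sketch of those two pieces (the class, method, and log path are assumptions, not from this article): a strongly named library opts in to partially trusted callers with AllowPartiallyTrustedCallers, and then protects the members that touch sensitive resources with explicit demands so the CLR still checks every caller on the stack.

```csharp
using System.IO;
using System.Security;
using System.Security.Permissions;

// Without this attribute, a strongly named assembly implicitly demands
// Full Trust of its callers, so partially trusted code can't use it at all.
[assembly: AllowPartiallyTrustedCallers]

public class AuditLog
{
    // Declarative demand: every caller on the stack must hold
    // FileIOPermission for this path before the method body runs.
    [FileIOPermission(SecurityAction.Demand, Append = @"C:\Logs\audit.log")]
    public void Write(string message)
    {
        File.AppendAllText(@"C:\Logs\audit.log", message + "\r\n");
    }
}
```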
You'll need to reconsider your deployment strategies as well, particularly if you use COM components in your application. For example, try to install COM components using Windows Installer (MSI) instead of shelling out to run regsvr32.exe. That way, you can install the component for a single user rather than for all users, narrowing the component's potential use as an attack vector. You can also hook into MSI's self-repairing features.
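For instance, assuming a hypothetical package named MyComponent.msi whose authoring registers the COM classes through the MSI tables rather than through self-registration, you can request a per-user install from the command line by leaving the ALLUSERS property empty:

```
msiexec /i MyComponent.msi ALLUSERS=""
```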
Another deployment issue: it's quite common for an application to be installed by one user and used by another, or installed using an admin account and run under a mere user account. This means that you can't create per-user settings at installation time, because they'll go into the profile of the installing user and won't be available to the user who actually runs the application. Instead, when the program runs, check whether the settings exist; if not, create them. That way, they'll be available to any user who runs your application on that machine. It also means your application can be used by more than one user on a machine, so keep per-user settings in each user's profile rather than in common storage. During development, you can test all of this by using Run As to launch your application under different user accounts.
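Here's a minimal sketch of that first-run check, assuming the settings live in a file under the user's Application Data folder (the folder and file names are invented for illustration):

```csharp
using System;
using System.IO;

static class UserSettings
{
    // Returns the per-user settings path, creating default settings on first run.
    public static string EnsureSettings()
    {
        // Roaming per-user profile folder, e.g. ...\Application Data\MyApp
        string folder = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
            "MyApp");                                  // hypothetical app name
        string file = Path.Combine(folder, "settings.xml");

        if (!File.Exists(file))
        {
            // First run for this user: create defaults in *their* profile,
            // not in the installing user's profile or a shared location.
            Directory.CreateDirectory(folder);
            File.WriteAllText(file, "<settings />");
        }
        return file;
    }
}
```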