JavaScript: The Web Assembly Language?
Over the past few years, several people have referred to JavaScript as the web's assembly language. Not because it's a low-level language, but because it is increasingly the output from compilers rather than a language that people write directly. Google, for example, has a Java compiler that generates JavaScript, and I've written one for Objective-C. The Dart programming language is currently only implemented in a translator that generates JavaScript.
So how well does JavaScript fill this role? And what would an ideal assembly language for the web look like?
It's worth noting that there are already several portable assembly-like languages. One of the first was P-Code, used by Pascal compilers, which was a stack-based virtual architecture that was intended to be easy to translate to native code. A more widespread example is JVM bytecode—inspired by Smalltalk bytecode—which is used by Java and other languages that run in a JVM. Android provides another, in the form of Dalvik.
Security
One of the big differences between something like JavaScript and P-Code is that JavaScript applications are inherently untrusted. If you take a photograph of a QR code, your smartphone will go to a web site, download all the JavaScript code there, and start running it—often without giving you the opportunity to check whether you trust the URL.
It is, therefore, very important that it is impossible—or as close to impossible as is actually feasible—for the code to do anything malicious, such as crash your computer (or even just the web browser) or leak private information to the Internet.
This is one of the main reasons why languages like Java and JavaScript do not allow pointer arithmetic and provide garbage collection. In a language such as C, it is very easy to conjure up a pointer to an arbitrary bit of memory and dereference it, either crashing the program or reading data that that part of the program should not be able to touch.
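As a rough illustration, a handful of lines of C is all it takes to fabricate and dereference such a pointer (the address here is arbitrary and chosen purely for the example):

```c
#include <stdio.h>

int main(void) {
    /* Fabricate a pointer from an arbitrary integer; no allocation backs it. */
    int *p = (int *)0xdeadbeef;

    /* Undefined behaviour: typically a crash, but it may instead silently
     * read (or, with a write, corrupt) memory belonging to some other part
     * of the program. */
    printf("%d\n", *p);
    return 0;
}
```

Nothing in the language prevents this; the only safety net is whatever the operating system provides.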
How important is this now? It's worth remembering the computing landscape when Java and JavaScript were developed. Back then, Windows 3.11 was still a major player in both the home and corporate worlds, and a lot of web designers ran classic Mac OS (OS X had not yet been released). In this world, a language could not rely on the operating system to provide protected memory: unlike UNIX and Windows NT, these systems enforced no memory protection between applications.
In this setting, garbage collection was vital. Not only could a memory bug in untrusted code compromise the browser, it could compromise the entire system.
Now, however, it is much less important. Systems such as FreeBSD's Capsicum, SELinux, and the Darwin sandbox subsystem show that modern operating systems are perfectly capable of running totally untrusted machine code while strictly restricting what it can do. On FreeBSD, for example, it takes under a dozen lines of code to set up an environment that can do nothing other than talk to a specific remote server and read and write files in a temporary directory.
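A minimal Capsicum sketch along those lines might look like the following. The temporary directory path is a placeholder and the connection to the remote server is elided; the structure is the important part: acquire the descriptors you need, then enter capability mode so nothing new can be opened.

```c
#include <sys/capsicum.h>
#include <sys/socket.h>
#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    /* Pre-open everything the untrusted code will need. */
    int dirfd = open("/tmp/sandbox", O_RDONLY | O_DIRECTORY); /* placeholder scratch dir */
    int sock  = socket(AF_INET, SOCK_STREAM, 0);              /* connect() to the server omitted */
    if (dirfd < 0 || sock < 0)
        err(1, "setup");

    /* Limit the directory descriptor to reading and writing files beneath it. */
    cap_rights_t rights;
    cap_rights_init(&rights, CAP_LOOKUP, CAP_CREATE, CAP_READ, CAP_WRITE);
    if (cap_rights_limit(dirfd, &rights) < 0)
        err(1, "cap_rights_limit");

    /* Enter capability mode: from here on, no new files, sockets, or other
     * global resources can be acquired; only the descriptors above remain. */
    if (cap_enter() < 0)
        err(1, "cap_enter");

    /* ... run the untrusted code, handing it only dirfd and sock ... */
    return 0;
}
```

Everything the sandboxed code is allowed to do is expressed as a descriptor it was handed before cap_enter(); the kernel refuses everything else.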
In the modern world, the isolation features of a virtual execution environment are far less important. If anything, they are a liability: the virtual machine dramatically increases the size of the trusted computing base. Compare the number of privilege-escalation vulnerabilities found in any modern kernel (including Windows) over the past year with the number of sandbox-escape vulnerabilities in something like Flash or the JVM, and you'll see a sharp contrast.