Ben Watson Talks about C#, WCF, Manycore, and Big O
Larry O’Brien: Describe your book’s structure and goals a little bit: when do you see a developer picking up your book and going “ah, this is just what I needed!”?
Ben Watson: The book is structured as a list of topics that provide short, concise examples of how to accomplish a specific task. Each section discusses the features briefly and why and when you'll need to use them, as well as mentioning important "gotchas" or other tips that are helpful. I think in many ways the tips are what make the book unique and valuable. One of my goals was to provide lots of these handy tips that I've picked up throughout the years.
Also, with this concise format, a lot of ground can be covered. While encyclopedic reference-style books are invaluable, sometimes you just need to see a quick example to teach or remind you how to do something. This format allows you to easily find and digest topics so you can put them to use very quickly.
I think the book will appeal to developers in a few situations:
- The new developer who is getting started with C# and is trying to digest the language references, but needs to know more practical tips than those types of books provide.
- The developer who has a bit of experience, but wants more insight into new features, or general tips they might not have come across yet.
- The developer who knows what they need to do, but not necessarily the best way to do it.
When a developer picks up the book and sees the breadth of topics, and all the practical, real-world advice about their use, then I think they will see this book as a valuable addition to their library.
Larry: Although C# can generally “get the job done” using purely managed code, you have sections on pointers and unsafe code, using UAC to elevate privileges, using P/Invoke, memory-mapped files, and so forth. Is it fair to say that performance is a major concern of yours?
Ben: Performance has definitely been a focus throughout my career, but I don't necessarily think it's the most important thing in most situations. Nearly all of my own .NET programming has been in pure, managed C#, but when the time comes to extend the capabilities a little more, it's important to know what's out there, and take advantage of it correctly. Knowing the OS is part of that, knowing about pointers is part of that (and knowing why .NET has tried to hide all of that gunk). Knowing about UAC is required learning for anyone programming on Windows these days: we don't need more developers being lazy and forcing the user to run as admin. Those days are past. There will always be capabilities of the OS that are beyond the managed framework, and it's nice to be able to access them from managed applications.
For a practical example, take bitmap smoothing, something I had to do in C# once. It's out of the question to do this with GetPixel() and SetPixel(). To do it in reasonable time, you have to understand the underlying byte format and access the bits directly.
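As a minimal sketch of that direct-access pattern (the method name and the trivial per-byte operation here are illustrative; a real smoothing filter would average neighboring pixels, and the code must be compiled with /unsafe):

```csharp
using System.Drawing;
using System.Drawing.Imaging;

static unsafe void InvertInPlace(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);

    // Lock the pixel buffer so we can walk the raw bytes instead of
    // paying for a GetPixel/SetPixel call per pixel.
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite,
                                   PixelFormat.Format24bppRgb);
    try
    {
        for (int y = 0; y < data.Height; y++)
        {
            byte* row = (byte*)data.Scan0 + y * data.Stride;
            for (int x = 0; x < data.Width * 3; x++)   // 3 bytes per pixel: B, G, R
                row[x] = (byte)(255 - row[x]);
        }
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}
```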
Another reason I wanted to cover all of that is that it's worse if people see these features and don't realize the damage they can do by using them poorly. I hope my tips at least get them thinking about that.
Larry: You mention Stack Overflow in your introduction. Your book presents problems and answers that might seem at first glance similar to what you could find on SO. Can you describe Stack Overflow and how you see it complementing or competing with books such as your own?
Ben: Tech books these days have to compete a lot with the whole Internet. StackOverflow.com is one of the better recent forums for getting questions answered, but it's by no means the only one.
The advantage of a book over the web in general is that the book is a lot more information-dense. Each page can give you a wealth of information that might otherwise be spread out over a dozen blogs. It can take years to learn best practices, come up with your own tips, and learn the ins and outs of a technology. This "meta-knowledge" often doesn't get fully expressed in Q&A sites or blog posts (except perhaps one topic at a time). It is more efficient to curate that type of information into a single volume and save everybody the time and frustration of hunting for answers to questions that aren't obviously worded.
Often, you need a basic level of understanding before being able to ask the right questions, and books will probably still be one of the best ways to ingest that needed information for a while to come.
I think another advantage of books is that they can force you to ask questions you haven't thought of, but should. On the Internet, it's so easy to filter out everything except what you're exactly looking for, even if what you're looking for isn't what you exactly need.
There is also the issue of trust. I don't think a book is inherently more trustworthy than the Internet, but many forums on the Internet have strange, inefficient, or just plain wrong solutions to some problems. StackOverflow deals with that fairly effectively with its rating and ranking system, but not all sites do this well. With a book, while not perfect by any means, there is at least an editing and streamlining process that goes on.
Larry: The first two words after the front material are “Type Fundamentals.” Gee whiz, isn’t that whole emphasis on explicit typing and structures old-fashioned? Why is it important for a developer to engage with types (other than just getting the compiler to stop complaining)?
Ben: We've come a long way, but I believe the better developers are the ones who understand not only where we are, but where we've come from, where we're going, and why. You cannot be an effective C# programmer without knowing the various ways to program in the language, and that includes types. Different ways of typing are useful for different situations. There are many things to consider: readability, maintainability, safety, extensibility, and more.
These basic concepts are also important to developers transitioning from other C-based languages like Java and C++, where subtle syntax and semantic differences can really bite you if you're not careful.
The dynamic features of C# are really cool, but they're not all that common yet either. You also don't want to overdo it and harm performance or readability.
Larry: What’s the difference between type inference, dynamic typing, and what you have in an untyped language?
Ben: Type inference is when you want to be lazy and avoid banging out the type name (especially if it's really long). It basically tells the compiler: "substitute 'var' with whatever type name I mean." In addition, it's useful with LINQ, where the type you get back might be generated by the compiler itself, so your only choice is to use var (or object, but that's ugly).
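A quick sketch of both uses (the names here are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class VarSketch
{
    static void Main()
    {
        // The compiler substitutes the real type for var:
        var lookup = new Dictionary<string, List<int>>();
        lookup["primes"] = new List<int> { 2, 3, 5 };

        // With anonymous types, var is the only practical choice, because
        // the compiler generates the type and it has no name you can write:
        var squares = Enumerable.Range(1, 5)
                                .Select(n => new { Value = n, Square = n * n });
        foreach (var s in squares)
            Console.WriteLine(s.Value + " -> " + s.Square);
    }
}
```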
Dynamic typing tells the compiler to wait until runtime for type resolution; the objects involved are still strongly typed, however. Dynamic types are especially useful when consuming COM objects, where you don't really know the type, or what methods it has, at compile time. At compile time, all method calls on a dynamic object are assumed to be valid, and if they're not, you only find out at runtime.
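For instance, in this small sketch the misspelled member compiles fine but would fail when executed:

```csharp
using System;

class DynamicSketch
{
    static void Main()
    {
        dynamic value = "hello";
        Console.WriteLine(value.Length);    // resolved at runtime: prints 5

        // This compiles too, since every member access on a dynamic object
        // is assumed valid, but it throws RuntimeBinderException if run:
        // Console.WriteLine(value.Lenght);
    }
}
```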
Untyped languages allow you to assign any value to any variable throughout the execution of the program. C# does not have this ability, unlike, say, JavaScript.
Larry: When we get to things like covariance, contravariance, and tuples, are we talking about things that are important to the average developer or are these things that you can probably ignore unless you’re designing a framework for public use or somesuch?
Ben: Contravariance and covariance are big, ugly words most people haven't dealt with since senior-level CS classes (if then), but they really are all about making things work as you assume they should in the first place. In some ways, figuring out what they mean is harder than actually making use of them. I don't spend a lot of time on these concepts, but I think an explanation of what they mean and why they're needed is definitely warranted. It's one of those features that, if you need it, you're glad it's there.
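A small sketch of both in C# 4.0 (the names here are illustrative):

```csharp
using System;
using System.Collections.Generic;

class VarianceSketch
{
    static void Main()
    {
        // Covariance: IEnumerable<out T> lets a sequence of a more derived
        // type stand in for a sequence of its base type.
        IEnumerable<string> strings = new List<string> { "a", "b" };
        IEnumerable<object> objects = strings;

        // Contravariance: Action<in T> lets a handler of a base type stand
        // in for a handler of a more derived type.
        Action<object> printAnything = o => Console.WriteLine(o);
        Action<string> printString = printAnything;
        printString("works as you'd assume it should");
    }
}
```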
Larry: You have a table of Big O notations on the first page of your chapter on collections, which might count as ‘enough said’ for some people. What is Big O?
Ben: Big O notation is a handy shorthand for describing the performance of algorithms, usually in terms of either speed or memory. For example, given an array of n unsorted elements, finding a specific value requires, on average, n/2 comparisons. For Big O, we generalize by throwing away the constant 1/2 and are left with n, so we say that linear search has a complexity of O(n) ("Big O of n"); in other words, the time to run this algorithm is proportional to n.
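In code, that analysis describes something as simple as this (a sketch, not an example from the book):

```csharp
// Linear search: finding the element at index i costs i + 1 comparisons,
// so a random target costs about n/2 comparisons on average: O(n).
static int LinearSearch(int[] items, int target)
{
    for (int i = 0; i < items.Length; i++)
        if (items[i] == target)
            return i;
    return -1;   // not found: all n elements were compared
}
```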
In memory terms, suppose we have an input n that creates a 2-D matrix of size n × n. The memory usage can be described as O(n²), i.e., memory usage is proportional to n².
Big O analysis is really only useful for large values of n, where the differences between algorithms can have an enormous impact on performance. For small values, it doesn't really matter whether you choose an algorithm that is O(n log n) or O(n²). However, when n is a million, that difference decides whether you can run the program at all.
In my work at Bing, we measure time in milliseconds and datasets in gigabytes. When implementing new algorithms, especially those that have to run per-query, we do this analysis as a matter of course since it's so critical. On the other hand, in my personal projects I don't usually consider Big O unless I notice a performance problem.
In the book, I wanted to at least call out these Big O characteristics, since they are part of the specification given by the .NET team. Often, you need to consider your application's usage pattern: are you doing frequent inserts? Do you build the dataset once and just need to do quick lookups? Knowing these can help you decide which type of collection to use.
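A sketch of the kind of trade-off that table captures (the data here is illustrative):

```csharp
using System;
using System.Collections.Generic;

class LookupSketch
{
    static void Main()
    {
        var items = new List<int> { 2, 3, 5, 7, 11, 13 };

        // List<T>.Contains scans the list: O(n) per lookup.
        bool inList = items.Contains(11);

        // If you build the dataset once and then query it frequently,
        // HashSet<T>.Contains is O(1) on average:
        var set = new HashSet<int>(items);
        bool inSet = set.Contains(11);

        Console.WriteLine(inList + " " + inSet);
    }
}
```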
Larry: What kind of magnitude of data do you have to have on today’s machines for Big O issues to become important?
Ben: I would say for most programs, it doesn't really matter, other than pride in your craft, in creating an efficient solution to a problem. Machines are so fast that programmers can afford to be a little lazy (not that I'm advocating such an approach!).
However, even though our machines are many orders of magnitude faster than those of even a few years ago, fast algorithms and compact datasets are still important. In fact, I would say in some sense they're more important, because as our datasets get bigger, our algorithms need to keep pace. As a simple example, suppose processor speed and dataset size both double every year. If you're using an O(n²) algorithm to process that dataset, your program will not be able to keep up with the expanding data: doubling n quadruples the work, while the hardware has only doubled in speed. An O(n log n) algorithm, on the other hand, would easily maintain its performance over time.
Larry: There are lots of options for interprocess and intermachine messaging — what does Windows Communication Foundation (WCF) bring to the table?
Ben: There is a lot to WCF, but I think there are two amazing things it does that are important: 1) it unifies lots of different wire protocols under the same interface, and 2) it hides the complexity of those protocols from you. This frees you from a lot of the inherent difficulties of network transmission and even allows you to switch protocols as needed with just a configuration change. It essentially lets developers continue using objects in all their programming, leaving WCF to figure out how to transmit an object to another computer.
On top of just data transmission, though, WCF adds security, logging, auditing, extensibility, and all those other so-called "extras" that are often neglected in applications, but shouldn't be.
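As a minimal self-hosted sketch of that idea (the IGreeter contract and address are hypothetical): the contract says nothing about the wire; the binding does.

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

class Host
{
    static void Main()
    {
        // Expose the same object over HTTP; moving to, say, net.tcp is a
        // matter of swapping the binding (often just in configuration).
        var host = new ServiceHost(typeof(Greeter),
                                   new Uri("http://localhost:8000/greeter"));
        host.AddServiceEndpoint(typeof(IGreeter), new BasicHttpBinding(), "");
        host.Open();
        Console.WriteLine("Listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```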
Larry: Why should a developer learn WCF if they already have experience with other messaging techniques?
Ben: Programming at a low level with a specific messaging technique is becoming less useful in today's connected world. Software really needs the ability to adapt to the type of network it's on, the presence of firewalls and routers, dynamic discoverability, different types of protocols, authentication, and more. WCF allows you to take what you already know, wrap it in a good framework, and then make even more out of it.
Larry: Does anyone actually ever use runtime Web Service discovery?
Ben: I think it's a pretty cool feature. It's useful in an enterprise where services can come and go. It's definitely something you'd want to be careful of, though. For example, make sure you have some kind of security protocol so you don't allow rogue endpoints to spring up!
Larry: You have a chapter on concurrent programming, which is enough of an opening for me to ask my obligatory question: How do you think the mainstream is going to react to the manycore world? Do you think our current mainstream models are adequate, are you a believer in STM or some other model, or do you think we're still in the dark?
Ben: I think we're probably doing the best we can right now, but it's still quite rudimentary. There is a lot of research in this area, which is vital. Automatically parallelizing work is something we will have to figure out some day soon and I think we're still largely in the dark on a lot of it.
Sometimes I think the only reason we're entering a manycore world now is that we started running into practical and physical limits on a single processor. The only way hardware manufacturers could keep making progress was to add more processors. Thankfully, we already had the concept of threads, so this level of concurrency was a natural evolution that could at least split the OS and applications across different cores, but I fear that's not good enough for the long term.
Part of the problem is that we still build software at too low a level of abstraction. We're still very much concerned with processors, and therefore threads, synchronization, shared memory access, and all that awful stuff that makes it impossible for us to wrap our heads around it correctly. In other words, though we think we're programming in a high-level language, we're still programming to the hardware! Decades of advances, layering abstractions from bit switches up to virtual machines, and we're still burrowing through all of that to touch the bare metal!
.NET's new Task Parallel Library is an interesting evolutionary step because it does hide these lower layers from you, and the only abstraction you deal with is the method or loop. But it's still just a beginning. This is the area where I would most like to see advances, but I don't know if C# is going to be the language to do all of it.
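A minimal sketch of that abstraction (the per-item work here is a stand-in):

```csharp
using System;
using System.Threading.Tasks;

class TplSketch
{
    static void Main()
    {
        var results = new double[1000000];

        // You express only the loop body; the library decides how to
        // partition the iterations across the available cores.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = Math.Sqrt(i);   // stand-in for real per-item work
        });

        Console.WriteLine(results[results.Length - 1]);
    }
}
```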
I don't know the solution to this, but I think it involves some kind of magical, standardized, high-level programming model that can efficiently parallelize itself across as many cores as it needs. I have no idea what that looks like, but I think we'll get there someday. Maybe we need advances in AI, quantum physics, biology, or more esoteric sciences in order to get there.
Larry: What features or programming models would you like to see in future versions of C#?
Ben: I'm not one who necessarily wants a lot more language features, so I think any new features should be very well-considered and widely applicable (which I know the language designers strive for).
However, continuing the theme of the previous question, I would love to see more work on the Task Parallel Library, and perhaps some language feature to add inherent parallelism to code in a declarative way.
As far as the .NET Framework as a whole, I think we're slowly headed there, but it would be nice to see .NET treated as a primary interface to the Windows API. I think we're getting closer with various additional code libraries, but it still seems like there are many things that are only possible by dropping into native code.