Designing Software by Crunching Knowledge
- Ingredients of Effective Modeling
- Knowledge Crunching
- Continuous Learning
- Knowledge-Rich Design
- Deep Models
A few years ago, I set out to design a specialized software tool for printed-circuit board (PCB) design. One catch: I didn't know anything about electronic hardware. I had access to some PCB designers, of course, but they typically got my head spinning in three minutes. How was I going to understand enough to write this software? I certainly wasn't going to become an electrical engineer before the delivery deadline!
We tried having the PCB designers tell me exactly what the software should do. Bad idea. They were great circuit designers, but their software ideas usually involved reading in an ASCII file, sorting it, writing it back out with some annotation, and producing a report. This was clearly not going to lead to the leap forward in productivity that they were looking for.
The first few meetings were discouraging, but there was a glimmer of hope in the reports they asked for. They always involved “nets” and various details about them. A net, in this domain, is essentially a wire conductor that can connect any number of components on a PCB and carry an electrical signal to everything it is connected to. We had the first element of the domain model.
Figure 1.1.
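To make that first element concrete for readers who think in code, here is a minimal Java sketch of what a net might look like at this stage. The class and method names are mine, chosen purely for illustration; they are not the classes from the project.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only; these are my names, not the project's.
class Component {
    void receive(String signal) {
        // react to the signal somehow
    }
}

class Net {
    private final List<Component> connectedComponents = new ArrayList<>();

    void connect(Component component) {     // a net can connect any number of components
        connectedComponents.add(component);
    }

    void carry(String signal) {             // ...and carries a signal to everything connected
        for (Component component : connectedComponents) {
            component.receive(signal);
        }
    }
}
```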
I started drawing diagrams for them as we discussed the things they wanted the software to do. I used an informal variant of object interaction diagrams to walk through scenarios.
Figure 1.2.
PCB Expert 1: The components wouldn't have to be chips.
Developer (Me): So I should just call them “components”?
Expert 1: We call them “component instances.” There could be many of the same component.
Expert 2: The “net” box looks just like a component instance.
Expert 1: He's not using our notation. Everything is a box for them, I guess.
Developer: Sorry to say, yes. I guess I'd better explain this notation a little more.
They constantly corrected me, and as they did I started to learn. We ironed out collisions and ambiguities in their terminology and differences between their technical opinions, and they learned. They began to explain things more precisely and consistently, and we started to develop a model together.
Expert 1: It isn't enough to say a signal arrives at a ref-des, we have to know the pin.
Developer: Ref-des?
Expert 2: Same thing as a component instance. Ref-des is what it's called in a particular tool we use.
Expert 1: Anyhow, a net connects a particular pin of one instance to a particular pin of another.
Developer: Are you saying that a pin belongs to only one component instance and connects to only one net?
Expert 1: Yes, that's right.
Expert 2: Also, every net has a topology, an arrangement that determines the way the elements of the net connect.
Developer: OK, how about this?
Figure 1.3.
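Rendered in Java, the relationships we had pinned down by this point might look something like the following sketch. The class names and the placeholder topology values are my own illustration, not the project's actual code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the relationships from the conversation above.
class ComponentInstance {                   // the experts' "ref-des"
    private final List<Pin> pins = new ArrayList<>();

    Pin addPin(String name) {
        Pin pin = new Pin(name, this);
        pins.add(pin);
        return pin;
    }
}

class Pin {
    private final String name;
    private final ComponentInstance owner;  // a pin belongs to exactly one component instance
    private Net net;                        // ...and connects to at most one net

    Pin(String name, ComponentInstance owner) {
        this.name = name;
        this.owner = owner;
    }

    void attachTo(Net net) {
        this.net = net;
    }
}

enum Topology { BUS, STAR, DAISY_CHAIN }    // placeholder values, purely illustrative

class Net {
    private final Topology topology;        // every net has a topology
    private final List<Pin> connectedPins = new ArrayList<>();

    Net(Topology topology) {
        this.topology = topology;
    }

    void connect(Pin pin) {                 // a net connects particular pins of particular instances
        connectedPins.add(pin);
        pin.attachTo(this);
    }
}
```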
To focus our exploration, we limited ourselves, for a while, to studying one particular feature. A “probe simulation” would trace the propagation of a signal to detect likely sites of certain kinds of problems in the design.
Developer: I understand how the signal gets carried by the Net to all the Pins attached, but how does it go any further than that? Does the Topology have something to do with it?
Expert 2: No. The component pushes the signal through.
Developer: We certainly can't model the internal behavior of a chip. That's way too complicated.
Expert 2: We don't have to. We can use a simplification. Just a list of pushes through the component from certain Pins to certain others.
Developer: Something like this?
[With considerable trial-and-error, together we sketched out a scenario.]
Figure 1.4.
Developer: But what exactly do you need to know from this computation?
Expert 2: We'd be looking for long signal delays—say, any signal path that was more than two or three hops. It's a rule of thumb. If the path is too long, the signal may not arrive during the clock cycle.
Developer: More than three hops.... So we need to calculate the path lengths. And what counts as a hop?
Expert 2: Each time the signal goes over a Net, that's one hop.
Developer: So we could pass the number of hops along, and a Net could increment it, like this.
Figure 1.5.
Developer: The only part that isn't clear to me is where the “pushes” come from. Do we store that data for every Component Instance?
Expert 2: The pushes would be the same for all the instances of a component.
Developer: So the type of component determines the pushes. They'll be the same for every instance?
Figure 1.6.
Expert 2: I'm not sure exactly what some of this means, but I would imagine storing push-throughs for each component would look something like that.
Developer: Sorry, I got a little too detailed there. I was just thinking it through.... So, now, where does the Topology come into it?
Expert 1: That's not used for the probe simulation.
Developer: Then I'm going to drop it out for now, OK? We can bring it back when we get to those features.
And so it went (with much more stumbling than is shown here). Brainstorming and refining; questioning and explaining. The model developed along with my understanding of the domain and their understanding of how the model would play into the solution. A class diagram representing that early model looks something like this.
Figure 1.7.
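For readers who prefer code to diagrams, here is one way that early model could be sketched in Java. It is a reconstruction from the conversations above rather than the project's actual source: the pushes live on the component type and are shared by every instance, and the Net increments the hop count each time it carries the Signal onward.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A reconstruction for illustration only; names and details are assumptions,
// not the project's actual source.

class Signal {
    private int hops;                        // how many nets the signal has crossed so far

    int hops() { return hops; }

    void countHop() { hops++; }              // a Net calls this each time it carries the signal
}

class Component {                            // the component type
    // Pushes belong to the type: a signal arriving at the key pin name is
    // pushed through to the listed pin names, the same for every instance.
    private final Map<String, List<String>> pushThroughs = new HashMap<>();

    void addPushThrough(String fromPin, String toPin) {
        pushThroughs.computeIfAbsent(fromPin, k -> new ArrayList<>()).add(toPin);
    }

    List<String> pushedPinsFor(String arrivingPin) {
        return pushThroughs.getOrDefault(arrivingPin, List.of());
    }
}

class ComponentInstance {                    // a "ref-des"
    private final Component type;
    private final Map<String, Pin> pins = new HashMap<>();

    ComponentInstance(Component type) {
        this.type = type;
    }

    Pin pin(String name) {
        return pins.computeIfAbsent(name, n -> new Pin(n, this));
    }

    void pushThrough(String arrivingPin, Signal signal) {
        // When a signal arrives at one pin, push it out through the pins the type dictates.
        for (String pinName : type.pushedPinsFor(arrivingPin)) {
            pin(pinName).propagateOnward(signal);
        }
    }
}

class Pin {
    private final String name;
    private final ComponentInstance owner;   // a pin belongs to exactly one component instance
    private Net net;                         // ...and connects to at most one net

    Pin(String name, ComponentInstance owner) {
        this.name = name;
        this.owner = owner;
    }

    void attachTo(Net net) { this.net = net; }

    void receive(Signal signal) {            // the signal arrives here from a net
        owner.pushThrough(name, signal);
    }

    void propagateOnward(Signal signal) {    // the signal leaves through this pin onto its net
        if (net != null) {
            net.carry(signal, this);
        }
    }
}

class Net {
    private final List<Pin> connectedPins = new ArrayList<>();

    void connect(Pin pin) {
        connectedPins.add(pin);
        pin.attachTo(this);
    }

    void carry(Signal signal, Pin source) {
        signal.countHop();                   // each time the signal goes over a net is one hop
        for (Pin pin : connectedPins) {
            if (pin != source) {
                pin.receive(signal);
            }
        }
    }
}
```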
After a couple more part-time days of this, I felt I understood enough to attempt some code. I wrote a very simple prototype, driven by an automated test framework. I avoided all infrastructure. There was no persistence, and no user interface (UI). This allowed me to concentrate on the behavior. I was able to demonstrate a simple probe simulation in just a few more days. Although it used dummy data and wrote raw text to the console, it was nonetheless doing the actual computation of path lengths using Java objects. Those Java objects reflected a model shared by the domain experts and myself.
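The tests that drove the prototype are not shown in this chapter, but something along the following lines, written in a JUnit style against the sketch classes shown after Figure 1.7, gives the flavor: dummy data, no infrastructure, and an assertion about the computed path length. Every name here is illustrative.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Illustrative only: a probe-simulation test in the spirit described above,
// using the sketch classes (Component, ComponentInstance, Net, Pin, Signal).
class ProbeSimulationTest {

    @Test
    void signalCrossingTwoNetsCountsTwoHops() {
        // Dummy data: one component type that pushes a signal from pin "a1" to pin "a2".
        Component chipType = new Component();
        chipType.addPushThrough("a1", "a2");

        ComponentInstance u1 = new ComponentInstance(chipType);
        ComponentInstance u2 = new ComponentInstance(chipType);

        Net net1 = new Net();
        Net net2 = new Net();
        net1.connect(u1.pin("a1"));          // the probe enters here
        net2.connect(u1.pin("a2"));          // pushed through u1 onto the second net
        net2.connect(u2.pin("a1"));          // and arrives at the second instance

        Signal probe = new Signal();
        net1.carry(probe, null);             // start the probe on the first net

        assertEquals(2, probe.hops());       // net1 and net2: two hops
    }
}
```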
The concreteness of this prototype made clearer to the domain experts what the model meant and how it related to the functioning software. From that point, our model discussions became more interactive, as they could see how I incorporated my newly acquired knowledge into the model and then into the software. And they had concrete feedback from the prototype to evaluate their own thoughts.
Embedded in that model, which naturally became much more complicated than the one shown here, was knowledge about the domain of PCB design relevant to the problems we were solving. It consolidated many synonyms and slight variations in descriptions. It excluded hundreds of facts that the engineers understood but that were not directly relevant, such as the actual digital features of the components. A software specialist like me could look at the diagrams and in minutes start to get a grip on what the software was about. He or she would have a framework to organize new information and learn faster, to make better guesses about what was important and what was not, and to communicate better with the PCB engineers.
As the engineers described new features they needed, I made them walk me through scenarios of how the objects interacted. When the model objects couldn't carry us through an important scenario, we brainstormed new ones or changed old ones, crunching their knowledge. We refined the model; the code coevolved. A few months later the PCB engineers had a rich tool that exceeded their expectations.
Ingredients of Effective Modeling
Certain things we did led to the success I just described.
- Binding the model and the implementation. That crude prototype forged the essential link early, and it was maintained through all subsequent iterations.
- Cultivating a language based on the model. At first, the engineers had to explain elementary PCB issues to me, and I had to explain what a class diagram meant. But as the project proceeded, any of us could take terms straight out of the model, organize them into sentences consistent with the structure of the model, and be unambiguously understood without translation.
- Developing a knowledge-rich model. The objects had behavior and enforced rules. The model wasn't just a data schema; it was integral to solving a complex problem. It captured knowledge of various kinds.
- Distilling the model. Important concepts were added to the model as it became more complete, but equally important, concepts were dropped when they didn't prove useful or central. When an unneeded concept was tied to one that was needed, a new model was found that distinguished the essential concept so that the other could be dropped.
- Brainstorming and experimenting. The language, combined with sketches and a brainstorming attitude, turned our discussions into laboratories of the model, in which hundreds of experimental variations could be exercised, tried, and judged. As the team went through scenarios, the spoken expressions themselves provided a quick viability test of a proposed model, as the ear could quickly detect either the clarity and ease or the awkwardness of expression.
It is the creativity of brainstorming and massive experimentation, leveraged through a model-based language and disciplined by the feedback loop through implementation, that makes it possible to find a knowledge-rich model and distill it. This kind of knowledge crunching turns the knowledge of the team into valuable models.