Global J. of Engng. Educ., Vol.I, No.2
Printed in Australia

 

Copyright 1997 UICEE


 

Concepts and Methods of Engineering
Design and Practice*


John D. Zakis

Electrical and Computer Systems Engineering,
Monash University Caulfield East, VIC 3145,
Australia
 

 
 

Engineers are involved in the creative solution of complex problems. This paper examines techniques for performing these complex design tasks, most of which have numerous solutions. How do we select solutions in this often extensive solution space? On the other hand, engineering problems frequently do not have a prior solution and we must therefore investigate ways in which to gain access to this solution space. The most famous instance of the discovery of a scientific solution is Archimedes and his exclamation Eureka! (I have found it!). After the insight, however, comes the hard work of validating the idea, using available components in its implementation or developing new techniques. This paper also examines applications of design validation and transformation techniques, processes which apply not only to the design engineer, but which are also implemented in modern design automation software.

* An expanded and revised version of a paper presented at the Congress

 

 
 



 



INTRODUCTION

Most engineering problems are so complex and have so many variables that an analytic solution is not possible. To be effective an engineer needs to be more than a technologist and must have qualities of leadership, creativity, intuition, spatial visualisation and holistic reasoning.

Engineering design is done in several different ways. A design may be no more than the improvement or enhancement of something already in existence, or it may be much more fundamental, requiring the application of creative thought, heuristics and cognition. This brings us to the Helmholtz-Poincaré-Getzels model of the design process, which involves five stages: First Insight, Saturation, Incubation, Illumination and Validation. The structured design method is also a much used and powerful technique, and the method of transformation modifies an existing design to fit a new specification or functional need. These will be considered in some detail.

Teaching students to understand and use some of these techniques is often challenging as the techniques are contrary to the scientific method and the algorithmic approach to problem solving. The paradigm is one of negotiable specifications and an unbounded set of possible solutions. From this perspective, engineering design may therefore be seen to be solution-based, as distinct from the problem-based approach of science and of research in the engineering sciences.

BACKGROUND

Engineering design is a very complex operation. It can be described as happening in solution space, which may contain an extremely large number of solutions, perhaps only restrained by the available technology. To find a solution in that space however requires much effort in trading off features against cost, weight, power consumption or any other constraint that may be relevant. The path to an optimum solution may involve testing, observing and considering failure, or any number of ways of deciding whether a particular solution is desirable or more desirable than a previously suggested and tested solution. Testing, and the almost inevitable failures or undesirable effects, suggest directions in which a better solution might be found. It is this pursuit of an optimum solution which constitutes a significant part of engineering design. Optimum may be measured in terms of picture clarity, speed of computation, cents per microwatt, or some other reasonable measure. There are always cost-benefits to be traded against each other.

This is then the fundamental cycle, the atom of process, the small movement in solution space: formally it can be called generate and test, or design through debugging, or what if investigation. Engineers often call it playing around or trial and error. Development of any product grows out of thousands of iterations of small piece-wise refinements in the design. Often this process involves no more than an improvement or enhancement of something already existing, but it can be much more fundamental, such as the creation or invention of a new product or tool: the response to an unmet need demanding the development of a new design.
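The generate-and-test cycle described above can be sketched as a simple improvement loop. The cost function and perturbation step below are hypothetical stand-ins for whatever measure (picture clarity, cents per microwatt) and design change a real project would use:

```python
import random

def generate_and_test(initial, perturb, cost, iterations=1000, seed=0):
    """Generate-and-test: repeatedly perturb the current design and
    keep the candidate only if it tests better than what we have."""
    rng = random.Random(seed)
    best = initial
    best_cost = cost(best)
    for _ in range(iterations):
        candidate = perturb(best, rng)      # generate
        candidate_cost = cost(candidate)    # test
        if candidate_cost < best_cost:      # keep only improvements
            best, best_cost = candidate, candidate_cost
    return best, best_cost

# Toy example: tune two design parameters toward a sweet spot at (3, -1).
cost = lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2
perturb = lambda p, rng: (p[0] + rng.uniform(-0.5, 0.5),
                          p[1] + rng.uniform(-0.5, 0.5))
best, c = generate_and_test((0.0, 0.0), perturb, cost)
```

Each pass through the loop is one of the thousands of small piece-wise refinements the text describes; the structure is identical whether the "design" is two numbers or a complete product.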

Certain simple problems can be solved by humans and computers using clear cut procedures which move directly to a final solution. The solution of more complex problems however is less linear and more creative, and may involve generating trial solutions which can be analysed so that the resulting drawbacks can be found and eliminated, or so that a decision can be made that a new direction is not viable, with a consequent return to the previous state of the problem.

Most engineering problems are so complex and have so many variables that an analytic solution is not possible; this is compounded if the engineer has to work in collaboration with needs-analysis specialists in economics, sociology, politics, the environment, law or agriculture, for example, whose involvement can be a distraction from the real issue of solving the engineering problem. Such ill-conditioned problems will often admit no perfect solution, so that a solution with the fewest drawbacks must be sought. Quantifying drawbacks can be equally difficult because what is a drawback in one situation may be a feature in another.

Often a complete solution cannot be obtained due to the complexity of the problem. Furthermore, there are, in general, many possible solutions, yet selecting any one solution is fraught with consequences, and what may appear to be a feasible solution from one perspective may require the omission of important features which may in turn render the design unviable.

BEING STUCK

There is almost invariably a moment in the design process when the designer gets stuck. This being stuck was described by Robert Pirsig in his book Zen and the Art of Motorcycle Maintenance, a book which has attained cult status [1]. He insists that stuckness is the key to innovative design: the void into which you must fall before revelation. We approach a problem from many different angles and the solution seems to be only moments away. Only it gets no closer, and there follows the abject realisation that there is no solution, there is no way out, everything that has been done is in fact inadequate and unsuccessful.

In total despair of ever attaining a solution, in resignation, the mind relaxes, if only for a moment, and there appears, as if by some magic, a revelation. This may be described as the way of the designer, an inescapable part of engineering design. Some may be fortunate in having frequent flashes of inspiration; sometimes this flash does not happen; or it may not happen for a very long time.

Design is not a linear, sequential, logical, analytic, language-based process in the way in which analysis and research are, where linked ideas and sequential thoughts lead to a convergent solution. Instead, design is a holistic process, of seeing whole things all at once and perceiving overall patterns and structures, making intuitive leaps of insight often in the absence of complete data and frequently leading to divergent conclusions and multiple solutions. The design process can therefore be described as essentially visual, with a spatial, relational, global and perceptual mode of thinking. Not surprisingly, much of engineering is done in terms of drawings. The words drawing office have become synonymous with design office in some fields of engineering, much to the abhorrence of electronics and software engineers who use drawings mostly in a documentary role.

It is unfortunate that most engineering schools do not endeavour to develop visualisation and perceptual skills [2]. Creativity seems to be a common trait however and will develop after graduation in a suitable work environment. Perceptual skills combined with verbal skills will be seen as the basic essentials for creative thought in the future.

There are several plausible explanations for the design process. Following the conceptual phase, there are then also various methods of design validation and implementation.

Incubation and Insight

The Helmholtz-Poincaré-Getzels model of creativity, which has been developed over time, is commonly used to describe the creative process [3].

The creative individual is caught up in an idea or a problem that defies solution in spite of prolonged focus. This is the void into which we must fall, insists Pirsig, it is a manifestation of being stuck, and it is the key to innovative design. Suddenly, and without conscious volition, a moment of insight occurs: and thus we have the legend of Archimedes in his bathtub suddenly exclaiming Eureka!, having realised how to determine the relative quantities of silver and gold in the King’s crown without destroying it.

It was the German physicist and physiologist Hermann Helmholtz who, in the late nineteenth century, first described the thought processes involved in attaining his scientific discoveries. He described them in terms of three stages: saturation, incubation and illumination. Most design engineers will readily identify with this process, although it is actually a little more complex.

The French mathematician Henri Poincaré, in describing his thought processes, added a fourth stage which he called verification. This is only logical as, once seen, an idea must be confirmed and the proof written up. Of course this process is not exclusive to mathematicians and it is readily observed in the operation of engineers who will test and validate an idea before it is used as part of a design project.

In the early 1960s this model was further developed by the American psychologist Jacob Getzels, who contributed the important idea of a stage preceding saturation, namely a preliminary stage of finding and formulating the problem. Getzels pointed out that creativity is not just about solving existing problems, but that creative individuals also actively seek out and discover problems to solve that no one else has found. As Albert Einstein and Max Wertheimer state, to ask a productive question is a creative act [4]. Wertheimer goes on to say:

The function of thinking is not just solving an actual problem but discovering, envisaging, going into deeper questions. Often in great discoveries, the most important thing is that a certain question is found. Envisaging, putting the productive question, is often more important, often a greater achievement, than solution of a set question.

This is just as true of engineering design, in particular of research into engineering science which underpins the technology.

Finally we end up with the Helmholtz-Poincaré-Getzels model of creativity:

First Insight

After a problem area is found, we enter into the First Insight phase, which requires an intuitive leap based on an appraisal of the overall scene of the problem. This requires seeing the whole picture and finding parts which are missing, stand out in some way, do not fit or need further investigation. This is largely an intuitive operation in which a huge amount of poorly presented and ill-structured information is processed and from which a key question or a visualisation of a possible solution may come.

It is not a linear, analytic or verbal process; it is more like a chess master seeing multiple scenarios which then need to be looked at more closely to decide upon the next move. To mix two figures of speech, it is being able to see the forest and the trees simultaneously and move the pieces around the scene in visual space, being aware of the spatial relationships.

Saturation

Saturation is the information gathering phase: the literature search, sometimes described as research. Libraries exist and are organised and indexed in such a way as to help researchers gather information about facts, figures and procedures. Education is frequently seen as a gathering and memorising of information with eventual specialisation so as to saturate the mind with finer and finer details of facts and procedures. This information gathering phase is therefore a familiar, conscious thought activity and is well understood by most. This is not a random gathering of data however; only useful information is selected, as determined by the first insight which gives purpose to the exploration of available information.

There is a set of strategies which can be used for looking at existing information so as to identify the problem more effectively in its context. With these heuristics we can more easily see how things fit within the boundaries of the problem. Adapting the artist’s viewpoint of the creative process, as cited by Edwards, to that of the engineer, we have the following heuristics:

Using these strategies, the heuristics of seeing, requires an understanding of the meaning of the terms [5].

Incubation

Depending on the field of endeavour and expertise of the practitioner, the search for knowledge might be slow and painstaking or the knowledge may be readily available. Once the mind is saturated with information and no logical solution is evident, then the mind gets frustrated or stuck and the problem is put aside, or put into incubation.

This is the phase during which the subconscious pursues its complex visual strategy of perceptual thinking. It is a holistic, nonverbal, intuitive, pattern recognition and synthesis mode, dominated by the heuristics of visual connections and universal relationships. This incubation phase does not in any way interfere with other complex subconscious tasks already in progress, such as walking or driving the car through heavy traffic whilst simultaneously planning the next great party. Many people arrive at their destinations without any memories as to how they got there or even the route taken, the task being safely handled by the subconscious without conscious monitoring, whilst concentrating on more interesting things.

Illumination

It is common for people to have important thoughts, insights or to see the light during boring or repetitive activities such as driving to work, having a shower, fishing, painting or playing golf. In a flash of recognition the solution to a problem is seen and is frequently accepted as being undoubtedly right. These solutions or insights are often spoken of as being beautiful and elegant.

These insights or illuminations are always described in terms of vision and the verb to see: all at once, I saw the answer; my eyes were open at last; I could see a flash of light; how could I not have seen it; the light dawns. Many famous scientists, including Faraday, Bohr and Einstein, have reported that they solved scientific problems in visual images and only afterwards translated their thoughts into words, sometimes with much difficulty. The word see is used in almost every description of the innovation process, such as: creativity is the ability to see problems in new ways; to think creatively we must be able to look afresh at what we normally take for granted; it came to me in a dream in which I saw the solution to the problem.

The word illumination, as used by Helmholtz for the moment of inspiration, is described in the Oxford dictionary as to throw light upon a subject. It is frequently interchanged with intuition or insight. Intuit, according to the Oxford, comes from the Latin word tueri, to look. Intuition is then defined as immediate apprehension by the mind without reasoning or immediate insight. The etymology of these and related words all relate to sight. Further variations on this theme are easily seen in the words foresight, hindsight, clear-sightedness and expressions such as blind to the idea. All refer to sight in the sense of coming to recognise mentally, to grasp a meaning or to understand, as in I see it now.

Words are totally absent during illumination and usually also during any serious thought. So much so that immediately after hearing or reading a question, every word disappears from the mind as one begins to think. Conversely, thoughts die the moment they are embodied in words as we enter the verbal explanation and translation phase. This non-verbal thinking process is variously described by Einstein, Leibniz, Kant, Schopenhauer and Hadamard [6][7].

Validation

After apparently stumbling across the answer unexpectedly and with incredible luck, being almost certain of its correctness having seen the light, the designer is able to get on with the job. The next stage is the validation phase during which the idea is thoroughly tested, analysed, evaluated and written up. With a new sense of confidence the creator proceeds to implement the custom chip, write the killer software application, compose the sonata, write the mathematical proof or reorganise the company structure.

Verification is a fairly well understood, conscious and analytical activity. A new idea needs to be verified and put into a form which can be used by others. This is the first embodiment of a new design. It will demonstrate the feasibility of the new idea, although the final version may be totally different from the prototype. Although creative thinkers report many instances of rapid validation of an idea, such as merely writing down the proof or the sonata, validation is frequently painstaking and time consuming, perhaps taking many years to reach fruition. The process itself involves procedures that have been studied, analysed and can be specified. A major aim of western education is to teach students how to verify ideas and produce unassailable proofs.

The following are essentially various methods of design verification and refinement.

Top Down - Structured Design

Top down or structured design is a means of managing complexity. A complex system can be more easily understood if it is decomposed and represented by the interaction of a number of functional modules. A motor car, for instance, can be described as having a body, an engine, a transmission and wheels. This is a conceptual level which even the technically incompetent can comprehend: one gets in, starts the engine, selects reverse or drive with the transmission and drives away. As we descend the hierarchy, understanding becomes more difficult as the complexity of the technology increases: the engine has pistons which drive a crankshaft through connecting rods, it has valves which are opened and closed by a camshaft, and it has a fuel injection system controlled by a computer. The fuel injection system can be decomposed into its structural elements, of which the computer is a complex module which can itself be broken down into many levels of complexity. The computer contains modules such as the Central Processing Unit (CPU), memory and so on. The CPU is made up of conceptual units called registers which are in turn made up of individual logic gates. The gates are implemented in terms of transistors which are in turn implemented in silicon in terms of geometrical structures with carefully controlled properties. To understand a motor car in terms of the interactions of the countless underlying primitives is impossible.

This divide and conquer method is a much used and a very powerful design methodology. Essentially, a design is partitioned into modules or sub-designs in some logical manner until a library component or existing design module can be fitted into a sub-design. The design then proceeds by identifying available components and designing their interfaces or the glue components. This methodology is available only to the experienced design engineer however as a priori understanding and intuitive knowledge of the finished product is necessary to sensibly partition the design problem into designable modules and their interfaces. Recognition of standard concepts or library modules is crucial in this style of design.

The library components themselves are also designed in a similar manner, by logical decomposition into functional modules until a manufacturing primitive is recognised. What constitutes a library component is determined by the level of abstraction at which we are working. A computer is a library component for the fuel injection system designer, as are various sensors to determine engine operating conditions and the driver’s demands. However, to a computer designer, the CPU, memory, peripheral devices and glue logic are the building blocks or library components. The CPU designer then sees several levels of library mapping which are necessary to transfer the design concepts into silicon cell descriptions which can then be translated into functional components by the silicon foundry.

Top down design is notionally highly elegant and it is the way in which engineers and software developers are taught to design things. These methods are not really applicable to problems for which the formal solutions are not already known however. Take as an example the requirement to design a computer. How do we partition a computer into functional modules which can then be implemented as separate design tasks? Obviously there is a central processing unit, memory registers, input-output facilities, external devices such as disk drives. But it is obvious only because we already know how to construct a computer and we understand computer architecture. To design a new computer requires only the transformation of a standard architecture into something a little different, because there are some design constraints that we would like to implement. Creating a totally new computer architecture is a non-trivial problem which has occupied the minds of computer designers for generations.

Library Mapping

In terms of ordinary engineering design, library mapping may involve a designer recognising standard components, such as commonly available or previously designed functional blocks, by dividing a complex specification or system requirement into modules which may implement more easily recognisable functions. The designer’s job is therefore to look over a specification and to attempt to visualise the means of implementing some aspects of it. Rather than an algorithmic process, this is a trial and error procedure in which something is implemented and then an assessment is made as to how the remainder of the design problem fits around that. By piece-wise refinement of the way in which the problem is subdivided, we generate an architecture which can be implemented in terms of its component parts. These component parts may be further subdivided, into sub-components working on a functional basis, from preconceived ideas of how things ought to be designed and built. The designer eventually arrives at a small enough subdivision of the functionality so that it can be recognised as a library component or a piece of hardware that can be bought over the counter.

In the arena of electronic circuits and systems this procedure may be done somewhat differently if implementing a design in terms of prepackaged component modules such as Programmable Gate Arrays (PGAs) or Application-Specific Integrated Circuits (ASICs). Decomposition of the design problem in this case must extend as far as the logical primitives, in terms of logic gates, flip flops or other micro components, in such a way that circuit synthesis can then be performed by combining primitives into useful functional blocks of the types available in the chosen PGA or ASIC.
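As an illustration of decomposition to logical primitives, the sketch below expresses an exclusive-OR function as a network of two-input NAND gates (a typical gate-array primitive) and checks the network against its specification. The gate set is an assumption for illustration, not taken from any particular PGA or ASIC family:

```python
# Decomposing a function to logic primitives: XOR built from four
# two-input NAND gates, then checked against its truth-table spec.
from itertools import product

def nand(a, b):
    return 1 - (a & b)

def xor_from_nands(a, b):
    """XOR synthesised as a network of four NAND primitives."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

for a, b in product((0, 1), repeat=2):
    assert xor_from_nands(a, b) == a ^ b   # implementation meets the spec
```

The same idea scales up: a synthesis tool decomposes the whole specification into such primitive networks and then recombines them into whatever functional blocks the target device actually provides.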

Electronic circuit synthesis tools which are useful in this library mapping problem are now becoming available [8][9]. A programmable logic array might map into gates, flip flops, counters or whatever components are available in the structure of the particular circuit into which the design needs to be mapped. This freedom is greater with an ASIC design because a design which is decomposed to a sea of gates can be composed to directly map into the available design primitives so that no macro-cell library is necessary. Macro-cell libraries may be used to good benefit by simplifying the structure into macro-cells which have been previously optimised for the chosen implementation technology. Circuit synthesis then proceeds by state assignments and building up ever larger components until the input-output signal functionality specification is achieved.

The choice of codes for states severely affects the complexity of the resulting logic, which is why state assignment is one of the most important optimisation problems in the synthesis of sequential machines. This problem is computationally complex however; it is NP-hard (nondeterministic polynomial-time hard). In a strict sense it has never been solved except by exhaustive search, which, for large machines, is impossible even using a fast computer. It therefore becomes necessary to develop heuristics in order to reduce the search space to a manageable size and to keep the best solutions in that reduced space. Only the most promising multiple decompositions should be implemented and tested. Sufficient knowledge about the features of the internal structure of a sequential machine which influence the quality of a decomposition is needed, and it must be used efficiently during the search process. For machines that have an internal structure composed of a number of easily distinguishable and loosely connected sub-machines, each containing tightly connected states, high quality assignments can often be constructed in this way. Even the best multiple sequential full decompositions however cannot guarantee the optimality of the resulting state assignments, because the search space is restricted to a special case.
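To make the state assignment problem concrete, the sketch below exhaustively scores every 2-bit encoding of a hypothetical four-state machine, using the total number of flip-flop bit toggles across the transition list as a simple heuristic proxy for the complexity of the next-state logic (real synthesis tools use far more sophisticated cost measures). Exhaustive search is feasible here only because the machine is tiny; the factorial growth of the permutations is precisely why the problem is NP-hard:

```python
# Exhaustive state assignment for a four-state machine: try every
# assignment of 2-bit codes to states and score each by the total
# Hamming distance over the transition list.
from itertools import permutations

transitions = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]
states = ["A", "B", "C", "D"]
codes = [0b00, 0b01, 0b10, 0b11]

def toggles(assignment):
    """Total flip-flop bit changes over all transitions for one encoding."""
    return sum(bin(assignment[s] ^ assignment[t]).count("1")
               for s, t in transitions)

# 4! = 24 encodings here; n states need n! evaluations in general.
best = min((dict(zip(states, perm)) for perm in permutations(codes)),
           key=toggles)
```

For this machine a Gray-code-like assignment of the cycle A-B-C-D wins, leaving only the chord A-C to cost two bit changes.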

These large and complicated sequential machines that define today’s control systems and serial processing units are difficult both to design and optimise, and to implement and verify, hence the necessity for computer-assisted design tools [10]. These design tools can perform the necessary decomposition of large sequential machine descriptions into smaller and less complicated partial machines. If a design cannot be fitted into a single special purpose chip or field programmable array, then it has to be partitioned, in an optimal way if possible, into subsystems. Each subsystem can then be implemented in an available field programmable component. Programmable logic implementation techniques, such as the Programmable Logic Array (PLA), Programmable Array Logic (PAL), Programmable Logic Sequencer (PLS) and Programmable Gate Array (PGA), dictate that special decompositions may be required to map the logic into the structural components available in the chosen device. Various decomposition strategies may be used in order to make it possible to implement a given sequential machine within the constraints of the building blocks available in a component, or perhaps within a limited area of silicon real estate. The goal may be to minimise the complexity of the resulting circuit or to improve other design parameters such as testability, speed, cost or power consumption.

Unfortunately, traditional logic modelling tools model circuits and complete systems in terms of their non-redundant functionality, that is in terms of logic primitives such as and, or, not, mux and exor. Instead these should be modelled in terms of the structural elements which are actually at the designer’s disposal in a chosen technology. For example, the commonly used Boolean algebra enables us to express all possible Boolean functions, but it fails to model their internal structures. It is possible to decompose logic functions exclusively into networks of logic primitives. These may alternatively be decomposed into any appropriate network of sub-functions. The practice of target-independent logic synthesis, and the consequent use of post-synthesis technology mapping, is a result of the lack of appropriate building-block models and synthesis methods for digital circuit structures; in consequence, the opportunities created by microelectronics technology cannot be fully exploited. It is important to develop a new generation of methods which will deal efficiently and effectively with this level of design complexity and with the characteristics of the building blocks actually available in a particular implementation technology. This will enable modelling and synthesis using the available circuit structures, providing correctness by construction, together with verification and intelligent search algorithms to explore the huge space of correct circuit structures effectively and efficiently so as to find an optimum.

The decomposition of sequential machines refers to the transformation of a given machine into an interconnection of two or more partial machines in such a way that the original behaviour is preserved. Such a process can be performed better and more quickly with a computer-aided design tool than it can manually, as an intensive analysis of the sequential machine is necessary in order to obtain high quality solutions. Heuristic computer analysis methods can be extremely efficient through their use of the relevant knowledge about the internal features of the sequential machine which influence the quality of the decomposition. This is a key factor which makes it possible to reduce the search space to a manageable size while keeping high quality solutions in that reduced space. Even an experienced designer is unable to propose many high quality solutions to many instances of this decomposition problem in reasonable time, because the decomposition decisions are strictly dependent on the instance of the problem and require complex and extensive analysis of the interrelated structure parameters. Simultaneous satisfaction of multiple constraints and optimisation of multiple objectives is very complex, so that full computer-aided automation is necessary.

Two sorts of decomposition are feasible for sequential machines: simultaneous (parallel) and sequential (serial).

The most important design parameters of a circuit for implementing a sequential machine are functions of the machine’s internal states and of its inputs and outputs. The possibility of implementing a machine as an array of logic building blocks depends on the number of internal states, as well as the number of inputs and outputs. Decomposition of the states is therefore necessary from a practical viewpoint; that is, a full decomposition is required so that the machine architecture can be built up to correspond with the submachine or integrated circuit architectures available.

The advantage of parallel (simultaneous) decomposition over sequential decompositions is that the former allows the non-active component machine to perform internal operations while it is inactive from an external viewpoint, whereas the latter only allows one component part at a time to be active, which makes execution time longer as the available outputs of one machine must be present before the next stage can perform any actions.

Parallel and serial full decompositions with decomposed internal states are based on the internal features of a sequential machine and on the choice of partitions and their orthogonality. Full decomposition is an approach which requires the transformation of a sequential machine or Boolean function into a straightforward structure of two or more co-operating partial machines in such a way that the original machine’s behaviour is retained. It entails a multidimensional decomposition of a machine in terms of all the important structural elements, such as inputs, outputs, internal memory elements and functional units. In this procedure the behaviour of the machine is specified in the form of an original sequential machine or Boolean function, and the associated physical constraints and objectives are modelled as a constraint satisfaction problem.
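The role of orthogonal partitions in full decomposition can be illustrated as follows. The six-state machine and its two partitions are hypothetical; the point is that when the partitions are orthogonal, each state is uniquely identified by the pair of blocks containing it, so two partial machines can each track one partition:

```python
# Orthogonal partitions of a state set: every block of P1 intersects
# every block of P2 in at most one state, so the pair (P1 block,
# P2 block) uniquely identifies each original state.
from itertools import product

states = {"s0", "s1", "s2", "s3", "s4", "s5"}
P1 = [{"s0", "s1", "s2"}, {"s3", "s4", "s5"}]    # tracked by machine 1
P2 = [{"s0", "s3"}, {"s1", "s4"}, {"s2", "s5"}]  # tracked by machine 2

def orthogonal(p1, p2):
    return all(len(b1 & b2) <= 1 for b1, b2 in product(p1, p2))

def pair_code(state, p1, p2):
    """The (block-of-P1, block-of-P2) pair identifying a state."""
    i = next(i for i, b in enumerate(p1) if state in b)
    j = next(j for j, b in enumerate(p2) if state in b)
    return (i, j)

assert orthogonal(P1, P2)
# Distinct pairs for all states: the original machine's state is
# recoverable from the two partial machines' states.
pair_codes = {s: pair_code(s, P1, P2) for s in states}
assert len(set(pair_codes.values())) == len(states)
```

Machine 1 needs only one memory bit (two blocks) and machine 2 a two-valued-plus state variable (three blocks), which is how full decomposition trades one large machine for smaller co-operating ones.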

Transformations

Transformation, in which an existing design is modified in some way to fit a new specification or functional need, is possibly the most used method of design in industry. Modifications may range from minimal changes to a full abstraction of a concept and its subsequent transformation into a completely different problem domain. The method has applications in the automatic computer synthesis of circuits and systems [10][11]. It is an important concept in circuit synthesis because the completed design can be checked by applying the inverse transformation to it. The inverse transform is, of course, available as part of the original synthesis procedure, so design correctness can be verified by comparing the result of the inverse transformation with the original circuit or system, which is known to be correct.

Decomposition, followed by logic synthesis methods using transformations, enables us to deal effectively and efficiently with the characteristic features of building blocks. In contrast to traditional logic design methods, this enables the modelling and construction of functionally correct digital modules and digital circuit structures, and solves the problem of design validation very efficiently. It also enables extensive examination of the solution space and allows better exploitation of the target machine’s structural features in relation to a given set of objectives and constraints. Additionally, the partial circuits are smaller and easier to design, optimise, implement and test. Correctness by construction, and post factum verification, are two complementary validation approaches which should be used jointly during the design process. Designing with previously proven modules can also make post factum verification much easier and faster through use of the structural knowledge which is generated during the synthesis procedure.

Design verification is a complex process in general because the sequence of transformations which must be performed to show that the implementation satisfies its specification is unknown. On the other hand, verification is simple when transformation is used as a computer-aided design tool, with the sequence of transformations known from the information produced during synthesis [9]. If information about the transformations used is recorded during synthesis, then finding the reverse transformations and performing the reverse mapping are both very easy. Since the verification process uses the inverses of the operators applied during synthesis, the probability of masking synthesis faults by verification faults is negligible, and verification performed in this way is very reliable. Verifying the physical constraints consists of estimating the parameters involved, by abstract modelling, low-level synthesis tools or simulation, and checking the estimates against the constraints.
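The recording of transformations during synthesis and their reversal during verification can be sketched as follows. The list-of-numbers “design representation” and the three transformations are hypothetical stand-ins invented for this sketch; a real synthesis tool transforms circuit descriptions rather than lists:

```python
# Record invertible transformations during synthesis, then verify the
# result by applying their inverses in reverse order and comparing with
# the original specification.

# Each transformation is a (forward, inverse) pair.
negate  = (lambda d: [-x for x in d],   lambda d: [-x for x in d])
reverse = (lambda d: list(reversed(d)), lambda d: list(reversed(d)))
shift   = (lambda d: [x + 1 for x in d], lambda d: [x - 1 for x in d])

def synthesise(spec, transformations):
    """Apply each transformation, recording its inverse for verification."""
    design, trace = spec, []
    for fwd, inv in transformations:
        design = fwd(design)
        trace.append(inv)            # structural knowledge kept for later
    return design, trace

def verify(design, trace, spec):
    """Undo the recorded transformations in reverse order and compare."""
    for inv in reversed(trace):
        design = inv(design)
    return design == spec

spec = [3, 1, 2]
impl, trace = synthesise(spec, [negate, shift, reverse])
assert verify(impl, trace, spec)
print("implementation verified against specification")
```

Because `verify` applies exactly the inverses of the operators used by `synthesise`, a fault introduced during synthesis cannot be masked by an identical fault during verification unless the recorded trace itself is corrupted, which reflects the reliability argument made above.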

Transformation is also known as mapping. In both traditional and software engineering there are many structurally similar problems, with similar input and output requirements, which nonetheless solve different problems. The same type of algorithm, or the same machine performing the algorithm in either software or hardware, is then applicable to each. The number of inputs and outputs can also be varied by a one-to-many or many-to-one mapping, to adjust the number of input and output parameters.
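The many-to-one and one-to-many adjustment of inputs and outputs can be sketched as bit packing and unpacking; the 4-bit word used here is an illustrative choice, not anything prescribed by the paper:

```python
# Adjusting input/output counts by mapping: a many-to-one mapping packs
# several single-bit inputs into one word, and a one-to-many mapping
# unpacks one word output into individual bits.

def pack(bits):
    """Many-to-one: combine individual bit inputs into a single word."""
    word = 0
    for b in bits:
        word = (word << 1) | (b & 1)
    return word

def unpack(word, width):
    """One-to-many: split one word into its individual bits (MSB first)."""
    return [(word >> i) & 1 for i in range(width - 1, -1, -1)]

bits = [1, 0, 1, 1]
word = pack(bits)              # 0b1011
assert unpack(word, 4) == bits
print("packed word:", word)
```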

Transformation can be achieved by decomposing the algorithm or procedural description into primitives (concepts), then mapping these primitives onto the available high-level library modules, such as objects or procedures. When these are joined together appropriately, we have a flow through the machine, in either software or hardware, from the inputs through the various functional units to the output.
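A minimal sketch of this assembly of primitives into a flow-through structure, with three hypothetical primitives standing in for library modules:

```python
# Join primitive operations so that data flows from the input through a
# chain of functional units to the output.

from functools import reduce

def compose(*stages):
    """Assemble primitives into a left-to-right processing pipeline."""
    return lambda x: reduce(lambda acc, f: f(acc), stages, x)

# Hypothetical primitives drawn from a "library" of simple operations.
scale  = lambda x: 2 * x
offset = lambda x: x + 3
clamp  = lambda x: max(0, min(x, 100))

pipeline = compose(scale, offset, clamp)
print(pipeline(10))   # 10 -> 20 -> 23 -> 23
print(pipeline(60))   # 60 -> 120 -> 123 -> clamped to 100
```

The same composition idea applies in hardware, where the “stages” become functional units and the composition becomes wiring between them.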

The microprocessor is the modern all-purpose device which can be adapted to solve an essentially infinite set of problems. These may be embedded systems such as the engine management unit of a car, or a manufacturing process controller. A general purpose computer is different only in that it has a special user interface for communicating with humans. These transformations are obtained primarily by software, the programs which run in the microprocessor and perform the algorithmic procedures required for the application. In civil engineering, an urban traffic flow analysis model can be transformed into a pedestrian mall or shopping centre traffic analysis tool. This is simply done by modifying some of the variables and the interpretation of results.
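The traffic-analysis transformation above can be illustrated with a Greenshields-style flow model, a standard traffic-engineering relation; only the parameter values and their interpretation change between domains. The numerical parameters below are invented for the example:

```python
# One flow model, two problem domains.  The Greenshields relation
#   q = k * v_free * (1 - k / k_jam)
# gives flow q as a function of density k, free-flow speed v_free and
# jam density k_jam.  Reinterpreting the units transforms a road-traffic
# model into a pedestrian-flow model.

def flow(density, free_speed, jam_density):
    """Flow rate as a function of density (Greenshields model)."""
    return density * free_speed * (1 - density / jam_density)

# Road traffic: vehicles/km, km/h -> vehicles/h.
vehicle_flow = flow(density=40, free_speed=80, jam_density=160)

# Pedestrian mall: the same formula with pedestrians/m^2 and m/min.
pedestrian_flow = flow(density=1.0, free_speed=80, jam_density=5.0)

print(vehicle_flow, pedestrian_flow)
```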

Overview

Many things might be designed in many different ways. The first computer, invented by Charles Babbage, was a mechanical contraption of such complexity that building it was not attempted for nearly 150 years after its conception. As conceived, Babbage’s Analytical Engine was a general purpose programmable machine with features remarkably similar to those of a modern electronic computer; it had, for instance, a separate store and mill, equivalent to the memory and processor of a modern computer. The design required a number of modules, such as automatic multiplication and division mechanisms, and it is one of these designs, embodied as Difference Engine No. 2, that has recently been built at the Science Museum in London [12]. This is a much refined and simplified design consisting of only 4000 precision parts, compared to the 25,000 parts required for the original Difference Engine, of which only half were completed before the British government withdrew funding in 1842.

In the meantime, other technologies, such as electronics, have made computers not only possible but actually low cost items. And thus solution space changes! The invention of the valve and later the transistor in effect created the art and science of electronics. This was followed by the invention of the integrated circuit, which then made possible the design of practical computers which have functional properties equivalent to those of Babbage’s invention, but which are nonmechanical (with no moving parts). This technology has advanced very rapidly and within 50 years computers have gone from laboratory curiosities to esoteric and expensive toys, to low-cost and ultra powerful machines which are affordable for the average person. So being stuck in this instance took a very long time, 150 years.

Being stuck and having no immediate plans or solution is inherent to engineering design. An engineer must be able to live at this edge of despair knowing that with some luck a solution will occur. This inventiveness, this creation of something out of nowhere, is the heart and soul of engineering. There is a caveat however: inspiration does not emerge from a vacuum. Ideas do not emerge from ignorance; complex ideas require a broad and deep knowledge base.

Engineers operate in a complex world of high technology; this is especially true of electronics and computer engineering where technology is marching forward at a phenomenal rate. As it marches forward, new openings occur, new solutions to previously impossible problems become possible. The engineer who is looking for creative solutions and wonderful inspirations must do so in a technical context, and must have a deep understanding of the available technology so that radical ideas can be interpreted in terms of the technology. Presently hopeless or technologically infeasible ideas are put aside, at least until another turn of the wheel of progress.

Solution space, which is a broad multidimensional bounding of the possible, is ever changing. What was conceived by Babbage, but which could not be realised in his time, eventually became a real possibility. With numerically controlled milling machines, the mechanical version has become manufacturable and electronic implementations of this system are readily achieved and go well beyond anything that Babbage might have envisaged at the time.

Envisaging a solution or a set of solutions to a problem does not necessarily constrain them to a particular technology. Solutions may be quite abstract, meaning that they are notional solutions abstracted from current technology in such a way that, as technology changes, the concept can be applied using the new technologies. The very best engineers design in the most general way. If additional features can be readily provided, or if a device or tool can be designed to be general purpose, where the general purpose includes the currently desired functionality, then it becomes easy to adapt or enhance the tool for future requirements. In this way a design concept can have a very long and useful life, in contrast to products which are designed to solve one particular problem and cannot be altered or modified to solve a related class of problems. The latter can be described as bad design, although engineering design is often done in this way, under the guise of economy or niche markets, or simply as the application of multiple patches to an inadequate existing design which needs to be enhanced in some way.

CONCLUSION

The Helmholtz-Poincaré-Getzels model of creativity requires that a first insight be followed by saturation with the problem; the designer then stands back from the problem, lets it incubate and waits for the flash of insight. The insight is then tested and evaluated, with a view to re-evaluating the problem and verifying the validity and functionality of the solution.

The divide and conquer method is a much used and powerful technique. Essentially, a design is partitioned into modules or sub-designs in some logical manner until a library component or existing design module can be fitted into this sub-design. The design proceeds by identifying available components and designing their interfaces.

The method of transformation uses an existing design which is modified to fit a new specification or functional need. The modifications may range from minimal to a full abstraction of a concept and its subsequent transformation into a completely different problem domain. This is also a method with application in automatic computer synthesis of circuits and systems because the design process can then be verified by applying the inverse transformation to the finished design.

Teaching students to understand and use some of these techniques is often challenging, as they run contrary to the scientific method and the algorithmic approach to problem solving that go with a technical education. However, students do find these techniques useful in many ways.

There is a tendency, particularly among students, to leave things to the last moment. One of the most difficult aspects of teaching is convincing students that they have to start thinking about a design problem now, instead of the day before the assignment is due. The need to become embroiled in a design problem immediately, so that the subconscious has an opportunity to incubate it and do the creative work, is not at all apparent to students until they have done it successfully a few times, often enough to rule out mere luck.

All successful designers operate in this way, be they artist, inventor or engineer. Developing this skill is therefore essential for anyone who aspires to be a successful design engineer. The primary reason for the existence of engineers and the need for an engineering profession is, after all, the creative solution of technical problems.

REFERENCES 

1. Pirsig, R.M., Zen and the Art of Motorcycle Maintenance. New York: Morrow (1974).

2. Field, B. W., Why do we educate only half the engineers? Proc. 7th AAEE Annual Convention and Conference, Melbourne, Australia, 350-354 (1995).

3. Edwards, B., Drawing on the Artist Within. Collins (1987).

4. Wertheimer, M., Productive Thinking. Harper & Row (1945).

5. Edwards, B., Drawing on the Right Side of the Brain. Jeremy Tarcher (1989).

6. Hadamard, J., The Psychology of Invention in the Mathematical Field. New York: Dover Publications (1945).

7. Wertheimer, M., Productive Thinking. New York: Harper and Row (1945).

8. Jozwiak, L. and Spassova-Kwaaitaal, T., Decompositional State Assignment with Reuse of Standard Designs. EUT Report 90-E-247, Eindhoven University of Technology, Netherlands (1990).

9. Jozwiak, L., Modern Concepts of Quality and Their Relations to Model Libraries. Workshop on Libraries, Component Modelling and Quality Assurance, IRESTE - IHT, Nantes, France, 97-116 (1995).

10. Jozwiak, L., Simultaneous Decompositions of Sequential Machines. Microprocessing and Microprogramming, 30, 305-312 (1990).

11. Jozwiak, L. and Kolsteren, J.C., An Efficient Method for the Sequential General Decomposition of Sequential Machines. Microprocessing and Microprogramming, 32, 657-664 (1991).

12. Swade, D.D., Redeeming Charles Babbage’s Mechanical Computer. Scientific American, Feb, 62-67 (1993).


 Biography

John D. Zakis

