Abstract of CACHE Lecture for Computing in Chemical Engineering Education Award

From Mainframes to Main Street

Presented at 2010 AIChE Annual Meeting

Salt Lake City, UT

Abstract # 193683


Computing in Chemical Engineering Education: From Mainframes to Main Street 2010 AIChE Annual Meeting in Salt Lake City, CACHE Symposium, Abstract #193683

Duncan Mellichamp, Professor Emeritus

Department of Chemical Engineering
University of California, Santa Barbara
Santa Barbara, CA 93106

Summary

This paper, acknowledging receipt of the CACHE Award 2010, provides an opportunity to present personal recollections and reflections about computers, computing, and applications to chemical engineering education based on 50+ years of direct participation and observation.

Modern computing technology has successfully changed chemical engineering education and practice so that they now derive substantially from the predictive ability of our science-based models and much less from the intuition, ad hoc methods, and correlations that were taught and used in the past. Employing first-principles models requires significant computing capability, far beyond that of the early mainframe computers of the 1950's. No one could have guessed how rapidly the mainframe performance of "just" 50 years ago would become so widespread, and then orders of magnitude more powerful: first in desktops, then in laptops, and now in "walking-around" cell phones (ergo, the reference in the title to "main street"). As discussed below, predicting where computing technology will go, and how fast, is full of uncertainty.

In the 1960's, control engineers enthusiastically spent much time and effort learning to use, and then employing, analog computers in the application areas of that time. These relatively specialized devices were ideally suited (at least in principle) to representing and solving ordinary differential equation models. They had first been employed in the aerospace industry to explore the flight characteristics of candidate airframes without having to build them. By 1960, the special capabilities of analog computers, with the ability to solve large ODE models at computation rates of 1000 solutions per second, were being coupled to the relatively primitive digital computers of the era. The result, "hybrid computers," provided a way to solve complex ODE models by combining the "repetitive operation" mode of the analog machine with the complex logic capability of the digital machine, yielding very rapid model optimization, model parameter fitting, etc.

It is interesting to conjecture that many of the first chemical engineers who worked with hybrid computers were also among the first to jump into the use of the early commercial control computers (e.g., the IBM 1800) as they became available for data acquisition and control, then to the first laboratory minicomputers for data acquisition and control (the field referred to at that time as "real-time computing"), then to the first off-the-shelf process control implementations in university laboratories. This paper simply poses that hypothesis as a way to outline the career path of the author, who was among the earliest chemical engineering digital and analog computer users and who did follow that path.

One characteristic of operating along this trajectory, at least for an academic, is that a tremendous amount of energy was required to get into the field, to stay current, and then to utilize the current technology for productive research. But that investment inevitably was lost as equipment became cheaper, faster, more capable, then completely ubiquitous, and as software designed to handle many previously specialized applications became commercially available.


Specialized computer hardware was replaced by standardized (commercial) analog and digital interfaces. And custom software, often laboriously developed in machine or assembly language, was replaced by off-the-shelf applications packages, such as LabView. These could be used to connect experimental processes to computers for data acquisition/control applications quickly, and they allowed a relatively unsophisticated user to implement sophisticated methodology in the laboratory. Suddenly, an adaptive control algorithm that had taken the author six months to implement in the early 1960's (essentially by building a specialized computer from a commercial analog computer and a bagful of relays ordered from Allied Radio, the Radio Shack of that period) could be implemented in a mere several weeks; and that was only 20 years later!

It might be asked: did the urge to design and build applications for research or production purposes force laboratory and process computer users into a mode where much time and energy were expended pushing an envelope that, from the user's perspective, was "self-receding"? In other words, with technology developing and becoming generally available almost as fast as applications could be implemented and demonstrated, were users chasing a moving target? Here two quantitative analytical terms from mathematical control theory can be stretched to describe this qualitative situation. From the users' viewpoint, such a changing environment was "uncontrollable" to the extent that, though desirable, rapid commercial developments in the computer field were completely out of users' hands. The situation also was (at least) partially "unobservable" by virtue of the new/better/cheaper technology that appeared on the market with no advance information. Even if one was willing to spend much time attempting to stay current with the field, there was no way to anticipate the market. The author takes no position on whether "chasing such a will-o'-the-wisp applications culture" was an intelligent way to pursue a career, nor does he have proof that the above characterization is a true representation of what actually happened to anyone else. But whether or not that is a reasonable characterization of key developments in the early history of process control and in computer data acquisition/control applications, living and working during that period was certainly exciting and fun.

In the last 20 years the author has chosen a different path, one where someone else does the development work, leaving the user to reap the benefits. This approach has taken him away from laboratory computing and experimental work, away from anything that requires specialized hardware/software much beyond what a typical undergraduate can utilize. Well before my retirement in 2003, it had become clear to me that the future of computing would be with standardized software: spreadsheet methods (e.g., Excel), editing packages (Word), presentation packages (PowerPoint), computation packages (Mathematica, Matlab), and design packages (Aspen, Hysys). In short, let someone else do all the development work and take the accompanying risks!

Since retirement and a return to teaching plant design with colleague Mike Doherty, now pro bono, I have come to an opinion appreciated perhaps only by older engineers. Computing has become so powerful, so cheap, so available that it has begun to force out the traditional ways of thinking that made engineers different from scientists, namely the ability to use ad hoc and shortcut methods. The old way attempted to "get 90% of the desired information and important conclusions from just 10% of the effort." Now, using modern computer technology, one can develop an exceptionally detailed process model, run it, optimize it, and "know 100% of what is desired, to as many digits of precision as desired," with little programming effort.


Well, maybe not! Students today, perhaps even practicing engineers, can easily be led into a trap in which the detailed results from a computer solution appear to represent reality but may not, because of process model errors. Or the results of a carefully constructed optimization study may totally miss the optimum point because of unsuspected parameter errors. Or straightforward and valuable generalizations cannot be deduced from a detailed computer study because a mere human being cannot collapse the large number of numerical/graphical results into a compact mental model that leads to clear, concise, and generalized conclusions.

An example of the latter situation will be illustrated in an area the author has re-explored in the past few years, namely the use of spreadsheets to develop Discounted Cash Flow (DCF) evaluations. These results typically determine the potential profitability of plant designs at the conceptual design stage, i.e., before going into detailed mechanical design. Most chemical engineering bachelors programs still expect students to analyze their plant designs at that level, i.e., where estimates are generated for capital expenditures and for annual revenues/expenses.

Predicting the Future of Computing

Probably the most graphic illustration of how far off an earnest prediction of the future of computing can be is provided by a projection that came out of the Rand Laboratories in Santa Monica in 1954, was published in one of the popular science magazines of the time, and is now found easily on the Internet. The famous "think tank's" study attempted to develop a mock-up of the home computer of 2004, i.e., 50 years after the 1954 study. But looking at the frightfully large equipment, even by 1960's standards (the old-fashioned TV monitor, printer, and gauges), and reading the caption announcing that the computer would be easy to program (in Fortran), one can sense that order-of-magnitude errors are involved in this projection. Seeing the scientist of the period posing with this equipment, and wondering what he was to do with the two large wheels, makes this individual all too happy to shy away from predictions, particularly if there is any chance that Moore's Law will continue to apply in the future. (Below: "Home Computer of 2004.")


Times haven’t changed. Consider Digital Equipment Corp., or Data General and their minicomputers that dominated the 1970’s, or Nokia, just a few years ago the dominant force in the cell phone market, now scrambling for survival. Their “mistake” was failing to abandon a successful product line and enter a new market (so-called “smart phones”), which momentarily is dominated by Apple/others. “Unforeseeable” major changes in technology are difficult to predict.

Fall 1957 -- IBM 650: Georgia Tech's First Undergraduate Student-Accessible Mainframe

One key feature of the author's career was the opportunity to work with several early mainframe computers in the 1950's (e.g., the IBM 650, whose central processing unit is shown below) and early 1960's (a UNIVAC machine). The large amount of auxiliary equipment associated with mainframe facilities can only be hinted at here … think large rooms packed full of racks housing auxiliaries such as printers, disk/drum drives, magnetic tape drives, etc.

The widespread availability of information on the Internet makes it relatively easy today to document experiences related to computers from the past. Thus mainframes and other interesting equipment from the author’s background are discussed in the full talk (web address given in the References). There the pleasure of working with computer languages in a pre-Fortran era is noted.

The IBM 650, like other similar machines of the era, operated without a compiler. Rather, it used a clever "three-address" Bell Labs interpreter that operated directly with decimal numbers and required the user to keep track of the hard addresses for each of the variables stored in the small amount of available drum memory. I took a first programming course with the idea of solving the trial-and-error design of a multi-pass shell-and-tube heat exchanger in the junior unit operations course. But the assignment came due before I had time to learn to use the computer's primitive logic operations. One day before it had to be turned in, I submitted my first computing job, printing out all possible feasible solutions on the large computer printer paper of the time, maybe 100 full pages! My only hope, to choose the best solution by manual inspection, worked like a charm. But a note from the computer operator indicated that I was never to pull that trick again.
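That brute-force strategy translates almost directly into a few lines of modern code: enumerate every candidate configuration, keep the feasible ones, and pick the best by inspection. Below is a minimal sketch in Python; the duty, heat-transfer coefficient, tube geometry, and search ranges are hypothetical placeholders, not the original 1957 assignment.

```python
import math
from itertools import product

# Brute-force enumeration in the spirit of the 1957 assignment:
# generate every candidate configuration, keep the feasible ones,
# and choose the "best" by inspection (here, smallest area).
# All numbers below are assumed for illustration.

Q = 1.2e6          # required duty, W (assumed)
U = 850.0          # overall heat-transfer coefficient, W/m2-K (assumed)
dT_lm = 35.0       # log-mean temperature difference, K (assumed)
A_req = Q / (U * dT_lm)            # required area from Q = U*A*dT_lm

candidates = []
for n_tubes, n_passes, length_m in product(
        range(50, 501, 50),        # tube count
        (1, 2, 4, 8),              # tube passes
        (2.0, 3.0, 4.0, 6.0)):     # tube length, m
    area = n_tubes * length_m * math.pi * 0.019   # 19-mm OD tubes (assumed)
    if area >= A_req:                             # feasibility check
        candidates.append((area, n_tubes, n_passes, length_m))

# In 1957 all of these were printed and inspected by hand;
# today a one-line sort replaces the 100 pages of printout.
for area, n_tubes, n_passes, length_m in sorted(candidates)[:5]:
    print(f"A = {area:6.1f} m^2  tubes = {n_tubes:3d}  "
          f"passes = {n_passes}  L = {length_m} m")
```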


Winter 1958 -- The Bendix G-15: The First Personal Computer

One machine few people will have worked with was the Bendix G-15 computer, often referred to as the first personal computer. Relatively cheap ($60,000 in 1958) and small (about the size of a home refrigerator), the Bendix G-15 could be programmed and operated by a single individual. (Below: the G-15 opened to expose its innards.) I showed how the laboratory G-15 could be used to optimize spinning machine set-ups so as to minimize transit time and heat build-up in the plant's polymer transfer lines, an important operation in a textile fiber manufacturing facility.

God's Test for an Individual's Frustration-Shedding Ability: Analog and Hybrid Computers

Anyone who came into the process control and automatic control fields around 1960, as the author did, learned how to utilize analog and, later, hybrid computers. These beautiful assemblages of operational amplifiers, potentiometers, precision resistors, and capacitors required manual dexterity (the components all had to be wired together via a removable patch panel), basic knowledge of DC circuit theory, and infinite patience to "scale" the equations in terms of the extensive variables (everything had to be expressed in a -10 to +10 VDC range) and time (or length, if that was the independent variable). But once programmed, with all potentiometers set accurately to represent the fixed parameters and everything set to work, they could solve differential equation systems at bandwidths that became achievable only recently using digital simulators. Another problem: early analog computers used vacuum-tube-based amplifiers. These had to be checked often and any non-functioning tubes replaced, or the results obtained were just so much garbage.

The commercial "king of the hill," an Electronic Associates 231-R PACE machine, was used by the author in a summer job with Dow Chemical in 1960 to develop a model for the kinetics of dehydrogenation of ethylbenzene to styrene. The facility cost on the order of $1MM and required a squad of electronic technicians to keep it operational. Advertising copy of the day showed attractively attired men and women lounging in chairs by the computer; these were simple decoys intended to promote the idea that analog computers were easy and convenient to use. Wonderful devices, but not easy to use! Learning how to program the machine took about one month; programming the relatively simple equations for the reactor took about a month. Finding the best-fit parameters for the kinetic model took no more than a few days.
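As a reminder of what "scaling" actually involved, here is a minimal worked example for a hypothetical first-order process; the symbols x_max, u_max, and the time-scale factor beta are illustrative and are not taken from the talk.

```latex
% Hypothetical process model:  dx/dt = -(1/tau) (x - K u),
% with estimated maxima |x| <= x_max and |u| <= u_max.
\begin{align*}
  &\text{Machine (voltage) variables, each confined to the } \pm 10\ \text{VDC range:} \\
  &\qquad X = \frac{10\,x}{x_{\max}}, \qquad U = \frac{10\,u}{u_{\max}} \\[6pt]
  &\text{Substituting, and compressing real time } t \text{ to machine time } t_m = t/\beta: \\
  &\qquad \frac{dX}{dt_m} = -\frac{\beta}{\tau}\left( X - \frac{K\,u_{\max}}{x_{\max}}\,U \right)
\end{align*}
% Each coefficient must then be realized as a potentiometer setting (0 to 1)
% times an amplifier gain; beta sets how much faster than real time the
% solution runs, e.g., fast enough for the repetitive-operation mode.
```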


Analog computers lingered for a decade; later versions utilized transistorized amplifiers, were fitted with binary logic modules, and were interfaced to early digital computers to yield the so-called hybrid computer. Hybrids were useful for iterative optimizations, for example, for optimizing trajectories. But eventually engineers discovered that a slow digital simulation would run all night unattended. Slow was acceptable compared with the frustrations of the analog.

Beginning Over: University of California, Santa Barbara

In 1966 I left a great job as Research Engineer with the Textile Fibers Department of the DuPont Company in North Carolina and moved to Santa Barbara to help establish a brand-new chemical engineering program. A first action I took with my meager start-up allowance was to buy a reconditioned EAI TR-48 transistor-based analog computer to use for instruction and research. (Below: a young and naïve assistant professor all dressed up to be photographed with his shiny new toy. He is unaware that the digital computer, with its ease of programming, will completely eclipse the raw computing speeds of the analog within a matter of several years.)

Fortunately, I was already facile in both worlds … probably the reason why my early jump into the real-time computing world was a completely natural one.

Laboratory & Real-Time Computers: ADCs/DACs, Machine and Assembly Language, #%&@!!

A conjecture left to be developed in the full presentation concerns the early workers in the area of real-time computing (aka data acquisition/control computing). While not provable, it is true that many of the first participants on the CACHE Task Force on Real-Time Computing were people with analog or hybrid computing savvy. These people were willing to work hands-on, had knowledge of electronics, and could program in low-level languages (the assembly or machine language required for device drivers not supported by the computer manufacturer), etc.


The earliest commercial laboratory digital machines (the Digital Equipment Corp. DEC PDP-8 and later PDP-12, and the Data General NOVA, shown below, and later the DG Eclipse, all minicomputers) cost on the order of $10,000-15,000 for an entry-level machine and typically required a user to build his [1] own input/output peripherals, assemble hardware, design and program interface drivers in machine or assembly code, and write applications software.

On reflection, many early experimental applications were amazing bootstrap efforts. The full presentation covers several in the author's own laboratory, including the computer-controlled model railroad (below) used to teach real-time computing to undergraduates beginning in 1974.

[1] The author recalls no women who were involved in these early activities, so a plausible counter-thesis is that early participants self-selected on the basis of willingness to endure much futile head-banging, a path that non-males apparently were too smart to go down.


Also elaborated in the full presentation are two "firsts" in chemical engineering laboratory applications: (1) adaptive control of pH (Mellichamp, 1964) and (2) control of an unstable distributed-parameter chemical process (Wong, Bonvin, et al., 1983). In the former case, an analog computer fitted with binary logic (mechanical relays) was used for the implementation. In the latter, which involved feedback control of an autothermal tubular chemical reactor, a laboratory real-time computing system equipped with standard analog I/O utilized eight axial temperature measurements in the catalyst bed to synthesize one reactor heat input to shift the process's only right-half-plane (RHP) pole.

Development of Real-Time Computing Materials and Textbook

During the period 1973-75 the author received a fairly large NSF education grant to build experiments and to put together one or more lecture sequences and the teaching materials required in the new area of real-time computing. One of the conditions of the grant was that an outside committee would visit the UCSB campus annually to provide feedback on the work in progress and (in principle) members would adopt/popularize the new developments. We were able to attract a really special group of the top people in the new field. Within a short period of time the group became the CACHE Real-Time Task Force and we decided to write a set of “Real-Time Monographs” to cover the fundamentals and practice in the new area.

In relatively short order, certainly for any 100% volunteer project, the group produced and disseminated a series of eight monographs during the period from 1977-79. These covered the full spectrum of materials and information that untrained faculty needed to enter the field.

CACHE Monographs on Real-Time Computing
D. A. Mellichamp, Editor
CACHE Corporation, Publisher


Not content to leave “well enough alone,” members of the Real-Time Task Force updated and revised the monographs into textbook form, and the editor “shopped the concept” to several publishers. It was chosen by Van Nostrand Reinhold as an alternate selection in their professional book club and published in 1983 (Below).

The book enjoyed a relatively decent press run, but then rapid changes occurred in the field, including the introduction of commercial laboratory real-time software (e.g., LabView) for general experimental applications. The need for specialized development for each application was eliminated; thus the glory days of real-time computing ended in the mid-1980's, though applications will continue as long as scientists and engineers run experiments.


Development of Process Control Textbook(s)

At the time of my PhD program at Purdue University (1960-64), no undergraduate control course in chemical engineering yet existed. Because I was fascinated with the area, I took a 4- or 5-course minor in EE (automatic control theory). That period coincided with the writing of the text Process Systems Analysis and Control (1965) by my mentors Don Coughanowr and Lowell Koppel, the first popular undergraduate process control text. As the eldest graduate student in the lab and resident analog guru, I had the privilege of programming many of the examples in their book on the analog computer, at some cost to my own research [2]. But when the opportunity came to build the first undergraduate process control laboratory at UCSB (1966-70), the simple liquid-level and stirred-tank heating processes used as examples in C&K showed up in physical form. Later, in writing the first edition of our own text, these processes naturally became examples.

Virtually all of the work described above, up to the writing of the CACHE monographs, preceded the arrival of Dale Seborg at UCSB in 1976. We quickly established a collaboration that lasted essentially until my retirement. Much research with joint students came out of our partnership, and newer, commercial control equipment and experiments were added to our joint laboratory. But clearly the major outcome from our years together was the textbook, Process Dynamics and Control, co-authored originally (1989, 2003) with Tom Edgar, another Princeton PhD (active at a relatively well known school in Texas). Last year a third edition was published, with Frank Doyle, yet another Princeton PhD, as fourth author (Seborg et al., 2010). (Below)

Spreadsheeting & Plant Profitability Estimates: Overlooking Simplicity Hidden in Complexity

Also left for the full presentation is a more complete discussion of discounted cash flow calculations via spreadsheet and the too-common practice of having students make numerous spreadsheet calculations, and explore multiple cases, all in a search for the optimum profitability operating point … but without taking away any fundamental understanding.

[2] No reason to feel sorry for the author; the authors of SEM(D) continued the noble tradition of using graduate/undergraduate student forced labor to produce our books. Apologies to many …


An example sketched out below illustrates that we have come to a situation in which computation has become so easy, and computing power so ubiquitous, that key insights and simplifications can be missed. The DCF table (spreadsheet below) was taken from Douglas (1988), Example 4.2.

An analytical expression for the Net Present Value at the time of start-up, NPV_0 [with Discount Rate = CR (Capital Rate) = FR (Finance Rate)], is developed immediately below. If

    TCI = a_-3 FC (1 + CR)^+3 + a_-2 FC (1 + CR)^+2 + a_-1 FC (1 + CR)^+1 + a_0 FC + WC + SU

then

    NPV_0 = TR_c (Profit_BT) [ b_+1 (1 + FR)^-1 + b_+2 (1 + FR)^-2 + b_+3 (1 + FR)^-3 + ... + b_+10 (1 + FR)^-10 ]

          + 0.1 TR (FC + SU) [ (1 + FR)^-1 + (1 + FR)^-2 + (1 + FR)^-3 + ... + (1 + FR)^-10 ]

          + [ TR_c (SU + SV) - TCI ] (1 + FR)^-10

Here FC is the Fixed Capital; WC is the Working Capital = α_WC FC; SU is the Start-Up Capital = α_SU FC; SV is the Salvage Value = α_SV FC; TCI is the Total Capitalized Investment (with the expression given above); TR is the Tax Rate; TR_c is the Complementary Tax Rate, 1 - TR; and the total operating lifetime of the plant is assumed to be 10 years.
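The expression maps directly onto a short script. The sketch below, in Python, evaluates NPV_0 from exactly those three groups of terms; every numerical value (the capital-allocation fractions a_n, the production factors b_n, the rates, and the α factors) is an illustrative placeholder, not taken from Douglas's Example 4.2.

```python
# Minimal sketch of the NPV_0 expression above. All parameter values are
# illustrative placeholders, not the numbers from Douglas's Example 4.2.

def npv0(profit_bt, FC,
         a=(0.1, 0.3, 0.4, 0.2),      # fractions of FC spent in years -3..0 (assumed)
         b=(1.0,) * 10,               # production factors b_+1 .. b_+10 (assumed)
         CR=0.15, FR=0.15, TR=0.48,   # capital rate, finance rate, tax rate (assumed)
         alpha_WC=0.15, alpha_SU=0.10, alpha_SV=0.03):
    TRc = 1.0 - TR                    # complementary tax rate
    WC, SU, SV = alpha_WC * FC, alpha_SU * FC, alpha_SV * FC

    # Total Capitalized Investment: pre-start-up capital compounded forward at CR
    TCI = sum(a_n * FC * (1.0 + CR) ** k
              for a_n, k in zip(a, (3, 2, 1, 0))) + WC + SU

    # After-tax operating profits over 10 years, discounted back to start-up at FR
    operating = TRc * profit_bt * sum(b_n * (1.0 + FR) ** -(n + 1)
                                      for n, b_n in enumerate(b))

    # 0.1*TR*(FC + SU) term (consistent with 10-year straight-line depreciation)
    depreciation = 0.1 * TR * (FC + SU) * sum((1.0 + FR) ** -n for n in range(1, 11))

    # Recovery of start-up and salvage value, less TCI, at the 10-year horizon
    terminal = (TRc * (SU + SV) - TCI) * (1.0 + FR) ** -10

    return operating + depreciation + terminal

# Example call with placeholder numbers (in $MM):
print(f"NPV_0 = {npv0(profit_bt=12.0, FC=30.0):.2f} $MM")
```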


The NPV_0 expression appears to be highly non-linear if one focuses only on the many discount-factor terms raised to powers down to -10. However, close inspection reveals that the result is a simple linear relation in the two key design quantities, i.e.,

    NPV_0 = a (Profit_BT) + b (FC)

where the coefficients a and b are simply constants composed of the design parameters contained in the many nonlinear expressions: the design parameters (e.g., the Tax Rate) are chosen and normally remain constant throughout a complete conceptual design exercise of the type we ask students to conduct, and, because WC, SU, and SV are factored directly from FC, every capital-related term is simply proportional to FC. In a comprehensive analysis available on his website, the author shows that it is easy to develop a fully linear model to provide financial analysis of a plant design whenever the capital expenditures, revenues, and costs are factored "estimates," as they are in all conceptual design studies. This analytical model can be manipulated, optimized, etc., practically without computational aids beyond a calculator. One can conjecture that this piece of fundamental insight has been overlooked for years, perhaps because a spreadsheet is so easy to generate and explore using applications software available on any personal computer.

At this point, having discussed just a single example, it seems fitting to end a career with a reckless extrapolation from the particular to the general. As educators, we should spend more time developing fundamental insight and avoid complex approaches that solve problems but yield little understanding of the results, trade-offs, effects of key assumptions, risks associated with choosing to use the "best solution," etc. It is worth recalling Jim Douglas' message: "Let's go back to being real engineers." At least those of us who regularly teach young engineers, and who have the insight, ought to do that.

References:

Coughanowr, D.R., and Koppel, L.B., Process Systems Analysis and Control, McGraw-Hill, NY (1965).

Douglas, J.M., Conceptual Design of Chemical Processes, McGraw-Hill, NY (1988).

Mellichamp, D.A., Identification and Control of a Gain-Varying Flow System, PhD Dissertation, Purdue University, Lafayette, IN (1964).

Seborg, D.E., Edgar, T.F., and Mellichamp, D.A., Process Dynamics and Control, Wiley, NY (1st Ed. 1989; 2nd Ed. 2003).

Seborg, D.E., Edgar, T.F., Mellichamp, D.A., and Doyle, F.J., Process Dynamics and Control, Wiley, NY (3rd Ed. 2010).

Wong, C., Bonvin, D., Mellichamp, D.A., and Rinker, R.G., "On Controlling an Autothermal Fixed-Bed Reactor at an Unstable Steady State: IV - Model Fitting and Control of the Laboratory Reactor," Chem. Eng. Sci., 38, 619-633 (1983).

For a full PowerPoint version of this paper (67 slides with narrative), go to

http://www.chemengr.ucsb.edu/people/faculty_d.php?id=24