modeling homogeneous relaxation


Theoretical and Computational Fluid Dynamics Laboratory
College of Engineering
University of Massachusetts Amherst
13 December 2010

Dr Kshitij Neroorkar’s defense was so smoothly delivered you’d have thought he’d done this a thousand times already. Who knows? Simulation of Flash-Boiling in GDI Injections with Gasoline-Ethanol Fuel Blends might be the kind of hard science topic where 1000 experiments are needed before you get to defend the PhD! As the lone non-family-member representative of the social sciences present, I fielded the post-defense question du jour: “How much did you understand?” Here comes the test, huh? At least enough to recognize that Dr Neroorkar’s subject matter seemed very similar to Dr Shivasubramanian Gopalakrishnan’s dissertation topic, which I distorted metaphorically in a previous blog entry: Language is a Fluid. A big thanks, btw, to Dr Blair Perot, who read and questioned the two-way utility of my analogy:

“Since I understand the fluids, this analogy certainly helps me understand what is important to linguists. I am less sure about if it will help the other way around. Does it really help linguists understand/describe linguistics better to think in terms of fluids?” (I like how he cuts right to the chase!)


8 nozzle plumes merge

The site of Dr Neroorkar’s study is in the nozzle part of a fuel-injection system, so it’s a pretty small physical space. Inside that wee tunnel all kinds of things are going on, one of them being flash-boiling: the violent explosion of liquid into steam (a gas). The better this explosion is controlled, the more usable energy one gets, but it is tricky to maximize the energy potential because, well, all kinds of things are going on! There’s a pressure drop where the fluid enters, certain processes that generate the growth of nucleation bubbles which start out teeny-tiny and expand until they touch each other, and then these bubbles bursting into spray in a process called atomization. The art is to manage the rate and speed (measured by a non-dimensional number – one of those deeply held math secrets engineers bandy about like social scientists bartering philosophical theories). The particular number in this case (which describes nothing in the physical world) is quite affected by the slightest change in temperature. Changes in temperature affect the rate, and there’s a whole bunch of modeling that needs to be done to get this whole puppy optimized. Or something like that.

“Then we do some mathematical tricks”

Turns out that with 8-hole injectors, the plumes of vapor generated from each hole merge in a way that needs to be taken into account, and this hasn’t actually been done before, or at least not so thoroughly, or otherwise unequivocally established through parametric study. What is the difference, someone asked, from what Dr Gopalakrishnan did before? “Shiva didn’t couple them.” Couple what? The nuances were definitely over my head here, but the two of them did use the same HRM model, which (as Dr Neroorkar explained to me later) “assumes the liquid-vapor mixture is one substance, not separate.” Treating the fluid-gas mix as homogeneous rather than heterogeneous (as explained here right at my level) enables an epistemological framework in which the system will relax to equilibrium if given enough time. There are (apparently) problems with the assumptions of cavitation, and the degree of superheat figures in some crucial way, not to mention the influence of specific geometry (90% symmetric) and the composition of the periodic boundary conditions (sounds an awful lot like “context” to me).
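
For anyone curious what “relax to equilibrium if given enough time” actually looks like on paper: in a homogeneous relaxation model the vapor mass fraction (the “quality”) gets nudged toward its local equilibrium value over a characteristic timescale, and the superheat largely determines how short that timescale is. Here’s a minimal sketch of that single idea in Python – invented numbers, my own illustration, not anything from Dr Neroorkar’s actual code:

```python
# Minimal sketch of the homogeneous relaxation idea (illustrative only, not the
# dissertation's solver): the vapor mass fraction x relaxes toward its equilibrium
# value x_eq over a timescale theta; a smaller theta (stronger superheat) means
# faster flashing.

def relax_quality(x, x_eq, theta, dt, steps):
    """Integrate dx/dt = (x_eq - x) / theta with a simple explicit Euler step."""
    history = [x]
    for _ in range(steps):
        x += dt * (x_eq - x) / theta
        history.append(x)
    return history

# Hypothetical values: start as pure liquid (x = 0), equilibrium quality 0.3.
fast = relax_quality(x=0.0, x_eq=0.3, theta=1e-4, dt=1e-5, steps=100)  # strong superheat
slow = relax_quality(x=0.0, x_eq=0.3, theta=1e-3, dt=1e-5, steps=100)  # mild superheat
print(f"after 1 ms: x = {fast[-1]:.3f} (fast) vs x = {slow[-1]:.3f} (slow)")
```

The real modeling work hinted at above is in getting that timescale right for a given fuel blend, temperature, and nozzle geometry – which is where the temperature sensitivity Dr Neroorkar wrestled with comes in.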

I like the idea of "swirl injection" (the colors aren't bad, either).

Somehow, Dr Neroorkar put all that together in the first validated 3D simulation showing the geometry region, the residence time dominated region, and the vaporization time dominated region, and got a volatility distribution curve showing stuff that matters. With important limitations of course: laminar flows, empirical time scales relevant to one fluid not others, so on and so forth.


The best part (of course) was the celebration, where I got to pretend to blend in with the relaxing homogeneous crowd of Indians (“convenience store not casino” as distinguished by Russell Peters) at Sneha & Kshitij’s cozy apartment. Except for Nidhi (who delivered all her laugh lines in Hindi so I couldn’t understand them), everyone stepped up to being blogged. Partha gave in pretty easily: “We aren’t cited that often.” I had a great conversation with Vikram, who informed me that “helium is helium,” and Upen, “Math is not context-dependent.” Bhooshan mildly admitted that there “are not so many more fundamental reactions to discover [in chemistry]”, which Upen amended, “until they are discovered!” I would have followed up on these topics except Ruchita chimed in, “This is not the conversation I want to be having!” Oh alrighty then!

Sandeep, meanwhile, was focused: “Where is the biryani?” Pritish arrived a little late and took a while to catch up: “She’s gonna use my name somewhere?” You know I was amused when Sneha told us “people used to think I was a boy.” And did I ever learn some gossip about somebody’s Victoria’s Secret!

The meal was awesome, the company grand, and the event momentous. Kshitij himself did the honors on the decadent chocolate mousse cake, announcing: “My job is done.”

Persisting In Place: A strategy of socioeconomic survival


Isenberg School of Management

University of Massachusetts Amherst

Organizational Strategy

Creative Community Co-Construction

Tooling around Nantucket over New Year’s weekend, I was struck by the sense of place evident in the care given to the landscape, not to mention our host’s keen interest in birding – a demonstrably popular island activity. Twin ethics of conservation and continuation, combined with a robust sense of humor, reminded me of the work of Dr Arturo Osorio, whose dissertation defense explored the intersection of economic geography, economic sociology, and strategic management as a town re-creates itself as a community of and for artists, composed of members who utilize local resources to co-construct themselves as a creative class.[1]

Looking at a place (Easthampton, MA) through an integrated analytical lens, Dr Osorio applied a collaborative multi-firm network theory[2] in which relationships are interdependent with the environment (conceived broadly) and the environment (including the embedded and implicit relationships) is inseparable from any given company, firm, or business. This fluid and dynamic model disallows sharp divisions between, for instance, “the company” and “the market,” or “employees” and “residents” and those whose physical residence is beyond town lines but whose livelihood is firmly founded within the community. While organizations are purpose-driven, the core economic transactions are deeply social – interpersonal, cognitive, cultural, and political. All of the activities of a company and the community that hosts it are intricately intertwined.

Dr Arturo Osorio refines Florida’s popular “creative class” model from its static premises, turning the notion of a creative class from a thing (an aggregation of people who fit required characteristics and are rather singularly motivated) to an on-going, interactive, socially-dynamic “process whose potential emergence may or may not be sustained over time.” Osorio pulled an audience of seventeen on a late fall day to listen to him tell a tale of a town where personal actions and associations coalesced into creative class organizing that generated a range of positive consequences for the community that continues, today, to feed back into organizing and interpersonal/professional community ties.

Choosing to contribute to a place

The most interesting point that I found in Dr Arturo Osorio’s dissertation defense was a question his results raised about why people may tend to identify themselves more on the basis of language than of the place where they live – speaking Spanish, for instance, rather than English. The matter came up in relation to limits on extending Dr Osorio’s findings to more urban, mixed areas, although it caused me to wonder about rates of bi- and multilingualism in and around Easthampton. Language fluency is a separate indicator from skills – Easthampton has above average concentrations of people with skills that are recognized as creative regardless of industry, as well as an above average concentration of people with skills that are used in industries recognized as cultural or creative. I wonder whether diversity of language can contribute to creativity. What Dr Osorio studied are the interrelationships of skilled people who consciously grew a creative culture by recognizing and validating the various skills everyone had to contribute, and interweaving them into a strong and vibrant economic community.

Dr Osorio supplements Florida’s depiction of the creative class, which has come in for its own share of criticism. Florida describes the creative class through a lens akin to the hard sciences, as a concrete thing composed of particular elements which, if put together according to the right equation, will reliably reproduce the desired end result. Osorio’s view is more nuanced, recognizing the role of variation and emergence in modes of self-organization when elements catalyze in ways that are not necessarily predictable. Because Osorio is focused on the combination of social factors along with economic factors, he is able to highlight the ways in which individuals can cohere positive socioeconomic changes in specific civic locations over measurable spans of time.

“It takes a community to build a creative class”

~ Dr Arturo Osorio

Dr Osorio conducted an extensive participatory ethnography and a complex social network analysis to demonstrate the relationships among narrowly-defined cultural groupings and broadly-defined socioeconomic structures. The sociality is not always visible, but operates nonetheless. While the general public is presented with the closed doors of artists at work, the artists themselves engage each other vigorously on all manner of concerns, including finding common cause and mutual gain with other community groups, such as persons with disabilities. As one might expect, the closest relationships are formed on the basis of homophily – emotional affinities, shared values and perspectives on issues of mutual concern, and enjoyment of similar kinds of people and events.

But, in a move crucial to generating a creative class, artists in Easthampton reached out beyond these most comforting relationships to learn about the needs and concerns of different artists and other community members in diverse affinity groups. Then they all consciously used this knowledge to proactively strike up alliances and strategize agreements to satisfy everyone’s desire to live and work in a community that promotes their individual, independent ability to be a certain kind of person. One of the novel discoveries of Dr Osorio’s work is that the key question motivating collaboration in Easthampton’s successful transformation from an old mill town to a thriving artistic community was not “Where do we want to go?” but rather “Who do we want to be?”

The process was not free of conflict or contradiction; however, the influence of the artists (a widely-inclusive category in Osorio’s frame) on the economy and standard of living in Easthampton is proving to be resilient and sustainable because, as an organizational process, it was always ground-up, involving multiple instances of grassroots, indigenous effort that culminated in a process that, in retrospect, can be identified by normal science. Dr Osorio calls it “a fragile plural phenomenon” in order to emphasize both the inherent organic quality of self-organization and the necessity of continuous nurturance and commitment if the collective benefits are to be retained over the long term. This can conceivably happen if town planners traditionalize the collaborative approach to problem-solving that has characterized the rise of Easthampton’s creative class to date.

Sound utopian?

Well, it is a small town in Western Massachusetts rather than a massive urban area. “The Planning Dept is two people,” explains Osorio, who are “doing mediation not planning.” They accomplish so much, so effectively, “not through dictating policies but by addressing specific problems and issues as they arise and working them through collaboratively – which [is what] generates policy.” Can this model be extended? I guess those are the experiments we all are waiting for. Dr Osorio affirms, “…[creative class] cohesion can only be reached, not by dictatorship but by communication.” An important question is the extent to which western Massachusetts is unique: few other places will meet similar contextual criteria that define this region (such as the proximity of several elite colleges, museums, historical/traditional work in the arts, etc.).

As the committee hurled questions at Dr Osorio, it became apparent how momentous the potential in his work is. His chair commented on “the open-endedness of what you’re doing” – a comment clarified by Daphne, another Management graduate student: “The ‘creative class’ is an empty signifier; you can fill it up in different ways.” This rather blows Richard Florida out of the water (IMHO). Instead of a precise configuration of ascribed statuses available mainly to the elite and those brilliant few from historically disenfranchised groups who manage to thread the needle and arrive in the top ranks, Osorio brings membership in the creative class within reach of all of us. We just have to decide to begin working with each other, in specific and targeted ways that are rooted, anchored, and otherwise defined by a real physical place. This may mean facing down racial antagonisms and divisions constituted by language/identity difference and infrastructural oppression. Dr Osorio’s dissertation research suggests the bridge is to build value and meaning into the physical, geographic place where you live or work.

[1] Richard Florida, 2007; see also Gibson & Kong, 2005: 542

[2] Miles, Snow & Miles 2005

Life in the Boundary Layer


Geosciences (Climatology)
159 Morrill South, UMass

“I just want to congratulate Ambarish on a very nice thesis; I enjoyed reading it.”

~ Dissertation Committee Member Dr. Henry Diaz

I enjoyed the extremely detailed presentation too, but I must confess that chills ran up and down my spine on a few occasions. Dr. Ambarish Karmalkar was careful not to be alarmist as he reported findings on experiments forecasting regional climate changes in Costa Rica and its neighbors. Dr. Karmalkar explains: “The frequency of temperatures in the future is something we have not experienced in the modern period.” In the case of Central America in general, and Costa Rica in particular, he was referring to a probable future increase in the average temperature of 3-4 degrees Celsius (roughly 5-7 degrees Fahrenheit) before the end of this century. If this does not seem like a big deal, compare it to the temperature fluctuation that accompanies El Niño – a mere one degree – and all the weather we (US Americans) blame on that. Then imagine that species are already becoming extinct in the subtropical rain forests. The Golden Toad, for instance, once abundant in the Monte Verde Cloud Forest of Costa Rica, has been suddenly extinct since 1989.

Climate Change Predictions for Central America:

A Regional Climate Model Study

by Ambarish Karmalkar

Specifically, Dr Karmalkar’s dissertation research involved testing the reliability of PRECIS, the regional climate model (driven by a general circulation model) used here for regional climate projections. He chose the region of Central America for a few specific reasons:

  1. more studies on biodiversity and climate change have been done in Costa Rica than anywhere else (so he has lots of material to compare and contrast in terms of results already collected)
  2. there is severe impact from changes in precipitation in the Yucatan (the ‘top’ or northern edge of Central America, dividing it from North America)
  3. Costa Rica meets the criteria for being a biodiversity hotspot: meaning it has a large number of endemic (local/native) plant species, and has “lost at least 70 percent of its original habitat.”

Dr Karmalkar’s paper will be published soon enough, I trust, and will give much more detail to those with deep knowledge about this kind of predictive mapping. For now I can only summarize, from a layperson’s perspective, the major points that I gleaned from his analysis. The PRECIS model works at two levels (atmospheric and on-the-ground) to try and predict the impact of climate changes on the selected global region.

Because PRECIS is measuring a part of the whole (a region of the earth, not the entire planet), it is a limited area model. This means a lot of the work of calculation has to occur at the boundaries – basically, at the edges or sides of the area. This involves figuring out the lateral boundary conditions (air and ground) and also the sea surface boundary conditions (especially temperature). Dr Karmalkar ran two experiments (each one requiring seven months!) to confirm or deny the validity of PRECIS. Basically, do its results match up with reality? First, the baseline test involved validating whether the model could take information from the past and run it through its algorithms to turn out a prediction matching what is actually happening now, in the present. He plugged in 31 years’ worth of observed data from ongoing measurements made in real time from 1960-1990. Given these values, the PRECIS model successfully generated a ‘prediction’ that accurately described current conditions of temperature and precipitation.
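
For what it’s worth, the logic of that baseline test can be boiled down to a few lines: compare the model’s simulated 1960-1990 climatology against what was actually observed, and look at the size of the errors. A toy sketch (my own, with invented numbers – the real comparison uses gridded data sets, not three cities):

```python
# Toy sketch of the baseline validation step: compare simulated and observed
# climatologies and summarize the error (all numbers below are invented).

observed_temp_c  = {"San Jose": 21.0, "Managua": 27.3, "Merida": 26.6}   # hypothetical observations
simulated_temp_c = {"San Jose": 20.2, "Managua": 27.9, "Merida": 25.8}   # hypothetical model output

biases = {city: simulated_temp_c[city] - observed_temp_c[city] for city in observed_temp_c}
mean_bias = sum(biases.values()) / len(biases)
rmse = (sum(b * b for b in biases.values()) / len(biases)) ** 0.5

print("per-station bias (model minus observations):", biases)
print(f"mean bias = {mean_bias:+.2f} C, RMSE = {rmse:.2f} C")
```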

Changes in Seasonal Rainfall a Serious Concern

central america wet and dry regions

I highlight precipitation because I realized that I have been thinking naively about climate change in terms of temperature alone, but it is the combined effect of increasing temperature with changes in amounts of precipitation that is of serious concern. PRECIS simulates surface air temperature correctly, although there was a long discussion about differing warm and cold biases of the comparison data sets – CRU and NARR – at low and high elevations. The PRECIS results seem to highlight these biases. Perhaps this information will help designers improve the modeling. Nonetheless, Dr Karmalkar and his advisors agreed, “despite the challenges of a topographically complex region, PRECIS is not doing a bad job simulating temperature.” However, it is the annual cycle of precipitation that most defines the climate of Central America. Historically, there have been two rainy seasons generating peaks of rainfall in June, and again in September-October, with a bit of a dip in between (July-August).

PRECIS underestimates the wet season by 40-50%. A higher resolution model will help improve the simulation, and there may be a problem with how the model simulates storms. There are many interacting variables in this dynamic system, including mean annual sea level pressure, the subtropical high pressure systems (Atlantic and Pacific) – in particular the North Atlantic Subtropical High (NASH), which defines the direction and speed of the trade winds that carry the precipitation – effects from the Coriolis force, sea surface temperature, and the low-level circulation of the atmosphere as modified by topography (mountains, valleys, and such).

Comparing the Baseline and a Future Scenario

Once the baseline is established as accurate, its trajectory is run out to a point in the future without changing anything. If things were to continue only along the path that has already been created (nothing added, nothing taken away), then a certain climate can be projected to the end of the 21st century. To actually get at prediction, that extension of the baseline has to be compared with a possible projected future which includes changes we can anticipate (such as the percent increase in greenhouse gases – increasing at a rate of 3% a year since 2000, more than double the rate in the 1990s).
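
Put crudely, the ‘prediction’ is a subtraction: run the model once with baseline forcing and once with the scenario, then difference the two climatologies. A rough illustration with made-up numbers (chosen only to echo the 3-4 degree figure mentioned earlier):

```python
# Illustrative only: the projected change is the scenario run minus the baseline run,
# computed per variable (temperature in C, wet-season rainfall in mm).

baseline    = {"temp": 25.0, "wet_season_rain": 900.0}  # hypothetical 1960-1990 climatology
scenario_a2 = {"temp": 28.6, "wet_season_rain": 740.0}  # hypothetical end-of-century A2 run

change = {key: scenario_a2[key] - baseline[key] for key in baseline}
percent_rain_change = 100.0 * change["wet_season_rain"] / baseline["wet_season_rain"]

print(f"warming: {change['temp']:+.1f} C")
print(f"wet-season rainfall change: {change['wet_season_rain']:+.0f} mm ({percent_rain_change:+.1f}%)")
```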

The Intergovernmental Panel on Climate Change (IPCC) created four different possible scenarios. Dr Karmalkar picked the scenario called A2, which comes with an associated “storyline” – the context of human activity that makes the numbers used in the scenario plausible. The A2 storyline is conservative: of the four choices it is the one that seems the most “like” the way our world really is, now:

…a very heterogeneous world with continuously increasing global population and regionally oriented economic growth that is more fragmented and slower than in other storylines.

In this story about our possible future, economic values outweigh environmental values, and regional development is pursued more than global strategies.

“There’s a cockroach.”

It is the difference between the two tests – the baseline and the potential scenario – that generates the actual prediction. The finding shows temperature becoming higher and the distribution narrower: the future “lies well outside the present day” and “that,” says Dr Karmalkar, “is a significant result.” Remember that long discussion about bias? The results for all regions show a cold bias – which means (if I understood this correctly) that the prediction itself is conservative, i.e., that the reality could well be worse than these particular results predict. Warming in Central America is higher than the global average. Not only this, but the wet and dry seasons in Central America are going to be seriously affected. The model isn’t doing as well with precipitation as it is with temperature, but – even limping – what it suggests is grim. Basically, amounts of rainfall during the wet season are going to decrease; some areas might even lose one of the rainy seasons entirely. In other areas, perhaps the second wet season will be extended and last longer, enabling a small increase in precipitation, but the overall loss of rainfall over the sea will trigger other effects, shifting pressure systems, decreasing sea level pressure, and strengthening trade winds – all of which will decrease precipitation.

Horizontal precipitation

It gets worse.  Dr Karmalkar did not say that. He would not.  He represented the science calmly, engaging an impressive display of slide jujitsu by answering every question posed during the defense with a quick scroll through his hundred (or more) back-up slides, pulling up the exact one to respond with precision to every query.

One of the most important sources of precipitation in Central America comes from clouds. The landscape includes tall mountains that touch the clouds (orographic cloud formation): moisture condenses directly onto the vegetation. (This is where the Golden Toad used to live.) Twenty to 22% of the total annual precipitation in Costa Rica comes from this direct source of moisture. Clouds form as a function of relative humidity, which is a function of temperature and pressure. Can you guess? The temperature goes up, which draws the ‘ceiling’ of cloud formation up too. Clouds no longer form at the usual altitude, but higher up. Bye bye horizontal precipitation. What killed the Golden Toad? Possibly a phenomenon called moisture stress.
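
A standard back-of-the-envelope way to see that ‘rising ceiling’ is the rule of thumb that the cloud base (the lifting condensation level) sits roughly 125 meters higher for every degree Celsius of spread between air temperature and dew point. This sketch is mine, not Dr Karmalkar’s, and the numbers are invented:

```python
# Back-of-the-envelope sketch (not from the dissertation): approximate the cloud base
# with the rule of thumb of ~125 m per degree C of (temperature - dew point) spread.

def cloud_base_m(temperature_c, dew_point_c):
    """Espy-style approximation of the lifting condensation level, in meters."""
    return 125.0 * (temperature_c - dew_point_c)

today  = cloud_base_m(temperature_c=20.0, dew_point_c=16.0)   # hypothetical present-day mountain site
warmer = cloud_base_m(temperature_c=24.0, dew_point_c=16.5)   # hypothetical warmer, drier future
print(f"cloud base rises from roughly {today:.0f} m to {warmer:.0f} m above the surface")
```

Once the cloud base clears the mountaintops, the forest that used to sit inside the cloud no longer gets wet.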

No Time to Lose

Again, this is my voice, not Dr Karmalkar’s.  When pressed by his committee whether “it is appropriate at this point to press the alarm and get the word out to conservation organizations and such?” Dr Karmalkar responded:

“Yes, we do have enough information to, maybe not press the alarm, but enough to say that something needs to be done…the Golden Toad disappeared in 1989, its population dramatically declined after the El Niño phase of 1986-87. If you look at the temperature anomalies of El Niño, they are only of a degree or so. If one degree of change is affecting the species in the area, then certainly four degrees warming is definitely large.

One of the other important things is that species do adapt to changes in climate. There are cases where plant species have migrated upslope, but that’s constrained by topography. In some cases, I talked of the cloud base heights going up, but another problem is deforestation, which has led to an increase in surface sensible heat flux. Land surface use alone can drive cloud bases even higher than the highest mountain peak.

We do have information to make the case that climate change of this magnitude might be serious.”

Language is a fluid (Part 1)

Theoretical and Computational Fluid Dynamics Laboratory
College of Engineering
UMass Amherst

A few days before his defense, the very-soon-to-be-Dr. Shiva promised to make his PhD defense as incomprehensible to a non-engineer as possible. He was teasing me, but it opens space for me to play with representing his work not only on its own terms, as I have tried to do with other friends' dissertations, but also by analogy. In this "Part 1" post, I've selected items from Dr. Shivasubramanian Gopalakrishnan's defense that enable me to play with fluid dynamics as an analogy for language-based communication dynamics. My not-so-hidden agenda is to attempt a translation between disciplines that might serve as an impetus to potential collaborations for addressing cross-disciplinary problems (the global type, interwoven across institutional fields, such as climate change, grinding poverty, and widespread starvation, to name a few).

“Modeling of Thermal Non-Equilibrium in Superheated Injector Flows”

Dr Gopalakrishnan’s area of specialization is non-equilibrium phase change operations. The basic phase change he studied for his dissertation involves the change of liquid fuel into vapor in automobile and aircraft fuel systems. There are a whole ton of things that need to happen in order for a fuel to provide adequate power to an engine so that a car or plane can travel, and a fair number of things that can go wrong in the attempt, such as flash boiling and vapor lock. The engineers know all about these problems, but I had to do a bit of research. A liquid boils, for instance, not only as a function of temperature, but also as a function of pressure. Suppose one thought of a linguistic flash boil as the interaction of

    a) a word’s definition (its ‘temperature’) and

    b) the context in which the word is uttered (the environmental ‘pressure’).

Right word, right context: everybody happy.
Right word, wrong context: problem!
Wrong word, right context: just a goof.
Wrong word, wrong context:
potential domestic disturbance or international incident!
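
(For the physics behind the analogy: the same liquid really does boil at different temperatures under different pressures. Here’s a rough Clausius-Clapeyron estimate for water – textbook constants, my own illustration, nothing from the defense – just to make the point concrete. Lower the pressure and the same liquid boils sooner; change the context and the same word behaves differently.)

```python
# Rough Clausius-Clapeyron estimate of water's boiling point at reduced pressure,
# to make concrete that boiling depends on pressure as well as temperature.
import math

R = 8.314                  # J/(mol K), gas constant
L_VAP = 40660.0            # J/mol, approximate molar heat of vaporization of water
T0, P0 = 373.15, 101325.0  # normal boiling point (K) at standard pressure (Pa)

def boiling_point_k(pressure_pa):
    """Invert the integrated Clausius-Clapeyron relation for the boiling temperature."""
    return 1.0 / (1.0 / T0 - (R / L_VAP) * math.log(pressure_pa / P0))

for p in (101325.0, 70000.0, 50000.0):  # roughly sea level, ~3 km, ~5.5 km altitude
    print(f"P = {p / 1000:6.1f} kPa  ->  water boils near {boiling_point_k(p) - 273.15:5.1f} C")
```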

Suppose we were able to slow down social interaction to 2000 frames per second (like this water droplet) in order to perceive how a single word enters language (and thus communication) as a whole?  Most people tend not to think much about the language we use unless/until something goes wrong, and then our energies focus upon repair. If we could cultivate more consciousness about how (for instance) individual word choices merge with larger pools of language use, then we might be able to diagnose discourse patterns and even design ways of communicating that work more efficiently in developing and implementing ideas that solve real-world problems.

In terms of the analogy I’m proposing here, how or when do words conserve mass and momentum without changing the substance or direction of established discourse or social patterns? When and how might particular words conform to the dictates of conservation while also accomplishing an alteration in substantive conditions that generates new forms of dialogue?

Vapor lock is not such a problem for cars anymore, but it remains a challenge for aircraft. Both issues involve the liquid becoming gas too soon. With flash boiling, part of the liquid fuel – but not all of it – superheats, leading to a two-phase (and thus inefficient) distribution of energy. With vapor lock, the bulk of the liquid vaporizes before practical use – also due to combinations of pressure and temperature. Vapor lock can cause a severe drop or even a complete stall in power. Not what you want to happen at high altitude! Nor in a conversation that you wish to proceed smoothly, for whatever reasons.

Suppose you need to talk with someone who uses a different language than you. A phase change is necessary for communication to occur. Suppose an interpreter (professionally trained, fluent in both languages) is available to transform the ‘fuel’ provided by your language into ‘power’ in the other language? This would be a phase change, yes? Keep in mind that in scientific categorization, liquids and gases are both fluids – they belong to the same medium. Similarly, English and Turkish, Spanish and Hindi, Malaysian Sign Language and Langue des Signes Française are all examples of the medium of language. The question of efficiency in fluid phase change is comparable to the question of comprehension in interpretation: the challenge is to identify the relevant factors and manipulate the conditions so that the interaction occurs with the least loss. In fluid heat exchange, one considers the

  1. rate of downstream atomization, the
  2. starting point of the phase change – its location within the nozzle, the
  3. extent to which dispersion continues outside of the nozzle, the
  4. endpoint of phase change, and (finally) the
  5. overall emission characteristics: a comprehensive image, if you will, of what is happening when, where, and how that involves all interacting elements and environmental conditions.


One can surmise that in addition to the environmental conditions of temperature and pressure, timing is crucial for effective fluid dynamic engineering! Time comes first in the list above (rate), requiring us to imagine the complicated system in four dimensions. Temporality is also one of the more obvious constituents of interpretation, as people using interpreters to communicate across language differences often express concern with the amount of time required for the interpreter to process the ‘injection’ before manifesting ’emissions’. In aircraft, the particular mechanism that Dr Gopalakrishnan studied involved using the fuel system itself “as a heat sink to increase engine performance.”

Paralleling the practical application of a heat sink with interpretation, the question of efficiency involves the extent to which an interpreter dissipates the hot air, absorbing or otherwise deflecting excess energy that distorts the equilibrium of the relational exchange. This cooling effect of the interpreter is not intended to minimize an interlocutor’s intended meaning (a common concern), but rather, to enable the potential energy (one could say, the understanding) to be most efficiently utilized in whatever power application (voice – Blommaert: ‘the capacity for semiotic mobility’ (p. 69)) is called for: a sudden increase in speed (e.g., for emphasis), or a gradual drop in tone (perhaps to shift a debate from argumentation to persuasion).

Dr Gopalakrishnan’s work zeroed in (among other things) on the relationship between pressure and enthalpy. In terms of vaporization, enthalpy is “the energy required to transform a given quantity of a substance into a gas.” For some reason (unknown), the energy required by interpreters to transform language through a similar phase change operation seems expected not to change the substance. Liquid should not become gas! (Despite that they are still both fluids.) Put another way, the diction (discrete word choice) seems expected not to change despite the phase shift from one language to another! This is akin to expecting, with fuel, that vaporization would leave the liquid completely unchanged: in which case no phase change would have occurred at all, and no useful energy could be extracted downstream.
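
To put one number on that definition of the enthalpy of vaporization, here is a tiny worked example using water’s textbook latent heat (purely illustrative; the fuels in the dissertation have their own, different values):

```python
# Worked example of "energy required to transform a given quantity of a substance
# into a gas," using water's textbook latent heat (illustrative only).

h_fg_water = 2257e3   # J/kg, latent heat of vaporization of water at 100 C
mass_kg = 0.5         # vaporize half a kilogram

energy_joules = mass_kg * h_fg_water
print(f"vaporizing {mass_kg} kg of water takes about {energy_joules / 1e6:.2f} MJ")
```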

Based on everyday experience, language “is incompressible” (as Dr Schmidt teased when I posed my analogy to him), yet – ironically? – there seems to be widespread social conditioning about languages that presumes an interpreter is magically able to perform phase changes (interpreting from one type of language/medium to another type of language/medium) without effects from environmental conditions. Occupational health and safety evaluations, not to mention professional lore and training, reveal that communicators in a cross-language interaction do need to consider

a) the capacity of the interpreter to store extra heat/energy (technically, thermal inertia) generated by interlocutors and

b) the potential for long-term damage to interpreters (and thus, the communication system) by constraints imposed by conditions of ‘social temperature’ and ‘social pressure’ (which can show up, in fluid dynamic terms, as cavitation).

Often, when the complex realities of language-to-language interpretation are surfaced, the fallback position is to eliminate the need for interpretation. “Get everyone using the same language.” Instead, I want to suggest that there are tremendous benefits to embracing the need for interpretation as an opportunity for highlighting precisely those areas and moments of greatest difference and thus of challenge. When communication appears to fail or feels inadequate, this can be taken as an indicator to those involved that the interaction potential has shifted from a single/shared perspective to a fuller range of views – which, if utilized, may suggest greater/deeper capacities and efficiencies.

One of Dr Gopalakrishnan’s innovations was to apply two different sets of equations to the problem of fuel injection efficiency. By coupling mechanisms that perform distinct tasks in different domains, Dr Gopalakrishnan was able to generate new knowledge about the overall process which will likely lead to improvements in efficiency. In a similar spirit, I seek to combine (admittedly limited) paradigmatic knowledge from engineering about fluids with paradigmatic knowledge from the humanities about language. This task necessarily involves translation between the two disciplinary languages. To be successful, co-learners will have to want to make the effort to move beyond disciplinary monolingualism. I hope the compelling problems of our time provide sufficient motivation for trying to bridge the segregation.

In a way, interpreters are always trying to apply “two different sets of equations” to the problem of efficient communication. These are the ‘equations’ of culture and language particular to each communicator. The unique aspect of interpreting (as a complex system involving the rapid combination of distinct tasks across domains with an ever-changing mix of elements) is that the people involved also have power to interpret – and re-interpret – the conditions. Unlike fluid dynamics, where the ‘temperature’ and ‘pressure’ are given factors of the environment (fixed, stable, presumably controlled/controllable), individuals in a communication process can always choose to maintain or change the context: to alleviate or increase the pressure, to drop or raise the temperature, to decide that any word – ‘right’ or ‘wrong,’ even if it generates vapor lock or superheating – can be worked with and turned to productive use. This takes effort, of course, and requires collaboration – therein lies the rub!

Coming up in Part 2: the challenge to traditional models of superheating fluids that only consider instability-based modes of breakup, the question of size vs quantity, and void fractions.

working the system: market enforcement of emission standards

Resource Economics
Stockbridge 217, UMass

Dr Linus Nyiwul’s dissertation defense was conducted almost exclusively in the language of math, with very little generic English explanation for the non-resource management layperson. So I cannot write very much about it, except that it was obvious that his faculty members are excited about the potential of this framework Dr Nyiwul has created for government regulators to exploit market mechanisms by leveraging emissions standards against the needs of firms to attract investors.
There are a couple of premises that Dr Nyiwul builds upon, including a perception that investors would prefer to put their money into “green” companies, and evidence that companies that improve their own environmental management systems experience increases in stock value (e.g., Feldman 1996). Dr Nyiwul described a whole lot of complicated stuff that needs to be properly balanced (sketched roughly after the list below):

  • setting a standard,
  • needing to monitor to ensure companies are meeting the standard,
  • keeping the cost of monitoring low enough to be reasonable (for government) while
  • making the threat of monitoring real enough that companies prefer to comply rather than risk being caught and having to pay the penalty.
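
None of this is Dr Nyiwul’s actual model, but the textbook deterrence logic behind that balancing act fits in a few lines: a (risk-neutral) firm complies when complying is cheaper than the expected penalty for cheating, and the regulator’s monitoring budget scales with how often it actually inspects. All numbers are invented:

```python
# Textbook deterrence logic (not Dr Nyiwul's model): a firm complies when the cost of
# compliance is below the expected penalty, i.e. monitoring probability times the fine.

def firm_complies(compliance_cost, monitoring_probability, penalty):
    """Expected-cost comparison for a risk-neutral firm."""
    return compliance_cost <= monitoring_probability * penalty

def regulator_monitoring_budget(monitoring_probability, cost_per_inspection, number_of_firms):
    """Monitoring spend scales with how often the regulator actually inspects."""
    return monitoring_probability * cost_per_inspection * number_of_firms

# Hypothetical numbers only.
print(firm_complies(compliance_cost=80_000, monitoring_probability=0.2, penalty=500_000))  # True
print(firm_complies(compliance_cost=80_000, monitoring_probability=0.1, penalty=500_000))  # False
print(regulator_monitoring_budget(monitoring_probability=0.2, cost_per_inspection=10_000,
                                  number_of_firms=50))                                     # 100000.0
```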

Somehow all those things get crunched through some equations that calculate

  1. “marginal damage” (whatever this means! it apparently refers holistically to “society”) and
  2. monitoring costs (to the government) and
  3. costs of compliance (for the firms)

…Now, where it gets really interesting is when the government establishes two emissions standards: a regular standard (the minimum to be deemed “in compliance” and avoid penalties) and an overcompliance standard – which would earn a special certification proving uber-greenness (or something en route to such glorified status). There is a pilot project currently underway, the National Environmental Performance Track (NEPT), which has weaknesses but whose results – plugged into Dr Nyiwul’s equations – demonstrate that TWO STANDARDS IS GOOD POLICY! Not to mention that firms which earn the overcompliance certification have a special marketing asset to appeal to investors. (They have to meet the minimum “regular” standard first, then apply and demonstrate accomplishment of the overcompliance standard.)

There was some fancy problem-framing, as Linus described one finding, saying that it came about in one way if you set the problem up this way, and comes about in another way if you set the problem up that way. (I love the fact that subjectivity can be found in math!) There are some issues with firms getting to self-report emissions (apparently without verification, unless the regulator goes to conduct the actual monitoring?). And there was quite a discussion about looking at the problem endogenously: with free entry into and out of the market. And output and size effects really matter (but cannot be reversed) in terms of the direct and indirect effects of enforcement costs. Yeah, I don’t really know what those sentences mean in “real” economic terms, but there may be other things in play at times which can lead to inconclusive results.
But… drumroll please! Dr Linus Nyiwul concludes, and his faculty agree:

“An optimal tax rate is smaller than the social marginal damage for a fixed n and no market imperfections.”

The challenges that issue forth from Dr Nyiwul’s work include (in no particular order):


  1. identifying which are the important uncertainties (given that anything could be uncertain except for whatever is under direct regulatory monitoring)
  2. defining clearly what “overcompliance” means (if “compliance” means paying the right tax, i.e., reducing emissions in order to minimize tax…. does overcompliance move a firm into a “credit” situation?)
  3. how to extend the framework from a single firm to an industry
  4. identifying how the framework as it is fits within known policy issues and concerns, and
  5. extending the frame beyond emissions to look at a lot of other policy issues.

How COOL is your seafood?

Resource Economics
UMass, Amherst

For her final oral examination for a Ph.D in Resource Economics, Siny Joseph presented an analysis of Country of Origin Labeling (COOL) for seafood. I echo the words of the external member of her committee, who said,

“After reading this paper, I pay more attention to my seafood.”

Dr Siny Joseph’s field is I.O. Economics – a term that I had to Google after the defense! My complete ignorance of the jargon in this field should alert you to the high probability that I have misconstrued or misunderstood major elements of her work. I will do my best to summarize and hope for correcting comments as needed.

Extrapolating from the Wikipedia entry and my limited exposure to other disciplines, Industrial Organization explores the economic interaction between two dynamic forces:

  1. the strategic behavior of firms (which I believe is the purview of my friends specializing in strategic management) and
  2. the structures of markets (statistical analysis like I’ve never seen!)

Given my lowest-score-in-the-cohort competence in all things math, most of the substance of Siny’s analysis and discussion with her Committee Members occurred in a language I cannot even pretend to understand: replete with “k-bars,” and K’s with subscript L’s and H’s, “thetas” and fixed parameter values composing profit maximization formulas… Go grrl go! Her findings, however, were described in comprehensible English – and they are fascinating.
Seventy percent of seafood purchased by consumers in the U.S. is imported; of these imports, 80% comes from less developed countries. COOL (Country of Origin Labeling) is legislation introduced in the 2002 Farm Bill, and implemented with seafood in 2005, with the idea that food quality and food safety are linked with where the food originates. Coincidentally, COOL is being extended to more foods this year with continuing debate over exemptions and on-going criticism of delays, making Dr Joseph’s research findings immediately relevant. Regarding seafood, huge sectors are exempt: restaurants and other food service providers, specifically, and products deemed to be “processed.” In general, then, COOL applies to the seafood you buy in a grocery store or market to cook at home.
It seems the first major task in an I.O. economic analysis is to define the boundary between what is included and what is excluded from the study. Siny focused on the US market, presumably because the boundaries could be readily established. (In a case study on shrimp, she explained the distinction between a “covered” and “uncovered” market, noting she’d had to go with the former – specifically an undifferentiated market – because the mathematical expressions for the latter were unmanageable. Basically (I think!) this means using idealized equations rather than ones more representative of real life.) Generally, Americans will assume that seafood of domestic origin is of higher quality than seafood of foreign origin, and consumers are most willing to pay the costs of labeling during and immediately after food scares – so that they (we, smile) can make (at least) this basic differentiation.
But (I kept thinking to myself) – labeling after a scare doesn’t do much to protect consumers during the scare and of course has no contribution to risk prevention whatsoever. So why isn’t labeling just done, as a matter of business habit? “Because,” Dr Joseph explained, “firms can masquerade low quality seafood as high quality when consumers don’t have all the information, and that’s where the profit comes from.” She and her committee members debated nuances of the statistical measurements, recommending and justifying choices of particular statistical tools, but did not question Siny’s basic finding that (now, with only three years of info available) the greatest profit comes under what’s called “voluntary COOL” (which does occur with some seafood products), followed by partial implementation of COOL (the status quo), and drops the lowest under “total COOL” – an ideal she recommends because “real consumption is greatest when there is full implementation of COOL.”
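
The masquerading point clicked for me once I wrote it out as a toy example (mine, not Dr Joseph’s model, and with invented numbers): when origin isn’t labeled, consumers pay one pooled price based on the quality mix they believe is on the shelf, so a low-quality seller captures part of the high-quality premium; full labeling takes that premium away.

```python
# Toy illustration of the masquerading incentive (invented numbers, not Dr Joseph's model):
# without labels, consumers pay a pooled price based on the expected quality mix, so a
# low-quality seller captures part of the high-quality premium.

wtp_high, wtp_low = 10.0, 6.0   # hypothetical willingness to pay per pound
share_high = 0.5                # believed share of high-quality seafood on the shelf
cost_low = 4.0                  # hypothetical unit cost of the low-quality product

pooled_price = share_high * wtp_high + (1 - share_high) * wtp_low   # 8.0 with no labeling
margin_masquerading = pooled_price - cost_low                       # 4.0
margin_full_labeling = wtp_low - cost_low                           # 2.0 once origin is revealed

print(f"margin while masquerading: {margin_masquerading:.2f}")
print(f"margin under full COOL:   {margin_full_labeling:.2f}")
```
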
The rub for me during the whole presentation is the use of this indicator called WTP: Willingness to Pay. What I’d like to see is a complementary WTP2 (squared) equation: Willingness to Profit. Somehow the whole debate seems framed with WTP2 as an unquestionable given – companies have the inalienable right to maximize profit and consumers have to pay for safety. It just strikes me as wrong; at least out-of-balance. Firms can afford to pay much more than any individual can! Anyway, Siny’s Committee engaged vigorously with her findings: “I like the story you’re trying to tell,” said a professor by speakerphone, wondering about pursuing the angle of diversion, and all of them wondering about policy recommendations based on these findings.
There was a measure of “Total Welfare” that supposedly mixes the best consumer outcome with the best business outcome…. and Dr Joseph did present some evidence that companies would label voluntarily under certain/specific conditions (of known/demonstrated consumer demand?), but for the most part companies are trying to duck this completely. For instance, shrimp traders are required to label unprocessed shrimp, so they would rather do something that qualifies as “processing” in order to avoid labeling. Doesn’t it cost to do that, too? Honest – I get very confused! Why is one type of cost preferable to another? I think someone needs to institute an equation such that consumer WTP cannot exceed 1/2 the square root of the actual incurred cost apportioned over the entire volume in order to somehow link a decrease in the firm’s WTP2 (willingness to profit) with the increase consumers are willing to pay. (Which is probably why I’m not an economist.)
Siny's graph.jpg
Nonetheless, even if the current data is not totally amenable to a single clear and concise argumentative point, I definitely agree with Siny’s committee member: “I like your plan of attack.” I want to be able to argue convincingly that the government (through legislation) should be on the consumer’s side – not only in the grocery store, but I would also like to be able to confirm the quality of seafood purchased in restaurants.
Keep it up, Dr Siny Joseph!

Industrial Organization, Wikipedia
Market coverage strategy,

Anuj in a suit

Human Performance Laboratory
Department of Mechanical and Industrial Engineering
E-Lab II
University of Massachusetts

46 glance points

The forty-six “glance points” represented in this graph illustrate eye gaze tracking during driving. (Now!) Dr Anuj Pradhan has been crucial in co-developing the RAPT novice driver training in risk perception over the course of a six-year doctorate degree and four experiments. Risk Awareness and Perception Training combines simulation and field techniques for assessing new drivers’ scope and skill in anticipating potential risks while driving.
Did you know?

  • Car accidents are the leading cause of death for teens in US
  • Teenagers, during the first six months of driving, have an eightfold increase in the risk of dying in a car crash
  • Teenagers, in general, are four times more likely than older drivers to die in a car crash
  • In numbers: teenagers are involved in 4.7% of the six million crashes annually in the US but account for 13% of the fatalities

Previous research has identified three main causes of teenage accidents, including failure to adjust speed appropriately to conditions (20.8%), failure to maintain attention to the task (23%), and – the biggest – failure to conduct an appropriate search of the driving environment (42.7%).
After his presentation, Dr Pradhan’s Dissertation Committee gave him some grief about the distinction he wants to draw between “tactical scanning” and “strategic scanning.” (They also asked him, right at the beginning, to take off his suit jacket and relax. This may have been the signal that they planned to heat up the room…!) The first question, however, came from one of the faculty during the presentation, and it involved clarifying the dependent variable of eye movement. Dr. Pradhan’s first experiment established a correlation between the recognition of risk (seeing it) and the knowledge that risks may be present (use of eye gaze to scan in order to identify (i.e. see) them if they are present).
Two more experiments refined the technique for linking eye movement with perception and recognition of risk. Results from the three experiments indicate improvements in visual search behavior in all driving situations, from the benign – when no risks are present, to situations with a minimal possibility of risk, and on up to situations with obvious dangers.
In other words, the students and volunteer test subjects who participated in these experiments learned about the strategic need for constant maintenance of visual attention across the broad driving environment, which might require the driver (i.e., me – or you!) to engage in specific tactical behaviors in order to reduce risk – or be able to implement evasive action should a risk materialize, because one has seen it in time! My contribution came with the fourth experiment: I got to test out the version in development – my experience (as an “older driver,” grin) may or may not have aided in refining the program, but it certainly reinforced for me that there is a purpose to where, when, and why I look and watch in the ways that I do while driving. (I learned that I could still do better!)
The need for this kind of training tool in driver’s education programs everywhere is immediately and obviously apparent. I was also fascinated by the application of temporal and spatial algorithms to the eye movements captured by the Mobile Eye movement tracker. Time and space coordinates for every eye movement had to be combined and cross-referenced in a Fixation Identification Algorithm with prior and subsequent eye movements in order to define a glance. These glances are then superimposed on the objects in the driver’s visual range, and categorized as on-road or off-road. In this way, the Mobile Eye Tracker pinpoints whether the driver’s eye looked directly at the truck parked on the side of the road in front of a passenger crosswalk, when (from near or far), and for how long. Does the gaze return or simply pass on to other objects?
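
The fixation-identification step itself has a well-known dispersion-threshold flavor (the paper linked at the bottom of this post compares such an algorithm to manual coding): group consecutive gaze samples that stay within a small spatial window for at least a minimum duration, and call that cluster a fixation. Here is a simplified sketch of that idea – my own illustration with made-up thresholds, not the Human Performance Lab’s code:

```python
# Simplified dispersion-threshold fixation identification (I-DT style). Illustrative
# only; the thresholds and the toy gaze trace below are made up.

def find_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """samples: list of (t, x, y) gaze points; returns (t_start, t_end, cx, cy) tuples."""
    def close_window(window, fixations):
        # Keep a candidate window only if it lasted at least min_duration seconds.
        if window and window[-1][0] - window[0][0] >= min_duration:
            fixations.append((window[0][0], window[-1][0],
                              sum(x for _, x, _ in window) / len(window),
                              sum(y for _, _, y in window) / len(window)))

    fixations, window = [], []
    for sample in samples:
        window.append(sample)
        xs = [x for _, x, _ in window]
        ys = [y for _, _, y in window]
        # Dispersion = horizontal spread + vertical spread of the current window.
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            window.pop()                      # the newest point broke the window
            close_window(window, fixations)
            window = [sample]                 # start a new candidate window
    close_window(window, fixations)
    return fixations

# Tiny made-up trace: steady gaze near (10, 5), a saccade, then steady gaze near (20, 5).
trace = [(0.00, 10.0, 5.0), (0.05, 10.2, 5.1), (0.10, 10.1, 4.9), (0.15, 10.0, 5.0),
         (0.20, 20.0, 5.0), (0.25, 20.1, 5.1), (0.30, 20.0, 4.9), (0.35, 20.2, 5.0)]
print(find_fixations(trace))   # two fixations, one on each cluster
```
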
In other words, the direction of eye gaze can indicate the driver’s perception of risk – or lack of it. Once a driver is informed of their own eye movement behavior, their awareness of risk is enhanced (or should be; I think the larger research program of the Human Performance Lab is lacking a necessary qualitative element). In fact, after training in the tactics of using visual scanning to perceive the possibility of risk, Dr. Pradhan shows that drivers improve risk awareness in four significant ways:

  1. Trained drivers maintain a wider horizontal range of vision
  2. Trained drivers shift about half their glances off-road, with more of that trained looking directed to the right – where more risks presumably originate (compared with untrained drivers, who look left and right more or less evenly)
  3. Trained drivers glance off-road for slightly longer times (presumably considering the extent to which the conditions in sight compose or obscure a risk)
  4. Trained drivers learn not only to transfer recognition of risk types between similar scenarios, but also transfer the skill of tactical scanning to different scenarios than those they were exposed to during training

Throughout the presentation, I kept thinking, “if only” – if only I had had this knowledge five years ago — the language of “visual scanning,” “risk perception,” and “risk awareness” — then Hunju’s driving practice might have gone more smoothly for both of us!
Anyway, Anuj’s defense rolled along. Dr Krishnamurty pressed him on the relevance or distinction between top-down and perspective views, which Dr. Pradhan handled with aplomb: “I got you, excellent answer.” No wonder Jeff calls Anuj, “my Yoda.” The (self-named) Curmudgeon wouldn’t let go of the tactical/strategic distinction but I wager this is merely ground for the next stage of hypothesis testing and theory building. The Committee Chair, Dr Fisher, supported Anuj throughout. They grilled him for a mere quarter of an hour after kicking out us observers (selected members of the fan club). And then they only made him wait for about that much longer (or less) before Dr Fisher came out and ushered him back in with a handshake and announcement:

“You’re done!”


The Younger Driver: Risk Awareness and Perception Training, Human Performance Laboratory, UMASS Amherst
Using Eye Movements To Evaluate Effects of Driver Age on Risk Perception in a Driving Simulator, by Anuj Kumar Pradhan and five others
glance, Merriam-Webster Online Dictionary
Fixation-identification in dynamic scenes: comparing an automated algorithm to manual coding, Proceedings of the 5th symposium on Applied perception in graphics and visualization
Driver’s License, Reflexivity