Viewing Tag
group dynamics

Geosciences (Climatology)
159 Morrill South, UMass

“I just want to congratulate Ambarish on a very nice thesis; I enjoyed reading it.”

~ Dissertation Committee Member Dr. Henry Diaz

I enjoyed the extremely detailed presentation too, but I must confess that chills ran up and down my spine on a few occasions. Dr. Ambarish Karmalkar was careful not to be alarmist as he reported findings from experiments forecasting regional climate changes in Costa Rica and its neighbors. Dr. Karmalkar explains: “The frequency of temperatures in the future is something we have not experienced in the modern period.” In the case of Central America in general, and Costa Rica in particular, he was referring to a probable future increase in the average temperature of 3-4 degrees Celsius (roughly 5-7 degrees Fahrenheit) before the end of this century. If this does not seem like a big deal, compare it to the temperature fluctuation that accompanies El Niño – a mere one degree – and all the weather we (US Americans) blame on that. Then consider that species are already going extinct in the subtropical rain forests. The Golden Toad, for instance, abruptly extinct since 1989, was once abundant in the Monteverde Cloud Forest of Costa Rica.

Climate Change Predictions for Central America:

A Regional Climate Model Study

by Ambarish Karmalkar

Specifically, Dr Karmalkar’s dissertation research involved testing the reliability of PRECIS, a regional climate model driven at its boundaries by a general circulation model. He chose the region of Central America for a few specific reasons:

  1. more studies on biodiversity and climate change have been done in Costa Rica than anywhere else (so he has lots of material to compare and contrast in terms of results already collected)
  2. there is severe impact from changes in precipitation in the Yucatan (the ‘top’ or northern edge of Central America, dividing it from North America)
  3. Costa Rica meets the criteria for being a biodiversity hotspot: meaning it has a large number of endemic (local/native) plant species, and has “lost at least 70 percent of its original habitat.”

Dr Karmalkar’s paper will be published soon enough, I trust, and will give much more detail to those with deep knowledge about this kind of predictive mapping. For now I can only summarize, from a layperson’s perspective, the major points that I gleaned from his analysis. The PRECIS model works at two levels (atmospheric and on-the-ground) to try to predict the impact of climate changes on the selected global region.

Because PRECIS is measuring a part of the whole (a region of the earth, not the entire planet), it is a limited area model. This means a lot of the work of calculation has to occur at the boundaries – basically, at the edges or sides of the area. This involves figuring out the lateral boundary conditions (air and ground) and also the sea surface boundary conditions (especially temperature). Dr Karmalkar ran two experiments (each one requiring seven months!) to confirm or deny the validity of PRECIS. Basically, do its results match up with reality? First, the baseline test involved validating whether the model could take information from the past and run it through its algorithms to turn out a prediction matching what is actually happening now, in the present. He plugged in 31 years’ worth of observed data from ongoing measurements made in real time from 1960-1990. Given these values, the PRECIS model successfully generated a ‘prediction’ that accurately described current conditions of temperature and precipitation.
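I am not privy to the model’s internals, so what follows is only a toy sketch (in Python, with invented numbers) of the kind of hindcast-versus-observation check described above: give the model the past, then measure how far its output drifts from what was actually observed.

    # Toy illustration of a baseline validation check (not PRECIS itself).
    import numpy as np

    def mean_bias(model, obs):
        """Average difference (model - obs); near zero means a good hindcast."""
        return float(np.mean(model - obs))

    def percent_bias(model, obs):
        """Relative bias in percent; useful for precipitation totals."""
        return float(100.0 * (model.sum() - obs.sum()) / obs.sum())

    # Hypothetical monthly series for 1960-1990 (31 years x 12 months).
    rng = np.random.default_rng(0)
    obs_temp = 25 + 2 * rng.standard_normal(31 * 12)
    model_temp = obs_temp - 0.4 + 0.3 * rng.standard_normal(31 * 12)

    print(f"temperature bias: {mean_bias(model_temp, obs_temp):+.2f} deg C")

A systematically negative number here is the kind of “cold bias” that comes up again below, and the same percent_bias check applied to rainfall totals is where a wet-season underestimate would show up.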

Changes in Seasonal Rainfall a Serious Concern

(figure: Central America’s wet and dry regions)

I highlight precipitation because I realized that I have been thinking naively about climate change in terms of temperature alone, but it is the combined effect of increasing temperature with changes in amounts of precipitation that is of serious concern. PRECIS simulates surface air temperature correctly, although there was a long discussion about the differing warm and cold biases of the comparison data sets – CRU and NARR – at low and high elevations. The PRECIS results seem to highlight these biases. Perhaps this information will help designers improve the modeling. Nonetheless, Dr Karmalkar and his advisors agreed that “despite the challenges of a topographically complex region, PRECIS is not doing a bad job simulating temperature.” However, it is the annual cycle of precipitation that most defines the climate of Central America. Historically, there have been two rainy seasons generating peaks of rainfall in June, and again in September-October, with a bit of a dip in between (July-August).

PRECIS is underestimating wet-season rainfall by 40-50%. A higher resolution model would help improve the simulation, and there may be a problem with how the model simulates storms. There are many interacting variables in this dynamic system, including mean annual sea level pressure; the subtropical high pressure systems (Atlantic and Pacific), in particular the NASH (North Atlantic Subtropical High), which defines the direction and speed of the trade winds that carry the precipitation; effects from the Coriolis force; sea surface temperature; and the low level circulation of the atmosphere as modified by the topography (mountains, valleys and such).

Comparing the Baseline and a Future Scenario

Once the baseline is established as accurate, its trajectory is run out to a point in the future without changing anything. If things were to continue only along the path that has already been created (nothing added, nothing taken away), then a certain climate can be projected to the end of the 21st century. To actually get at prediction, that extension of the baseline has to be compared with a possible projected future which includes changes we can anticipate (such as the percent increase in greenhouse gases – increasing at a rate of 3% a year since 2000, more than double the rate of the 1990s).
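That 3% a year is worth pausing over, because compounding is sneaky. A one-line check (my arithmetic, not the dissertation’s):

    # At 3% annual growth, how long until annual emissions double?
    import math
    print(math.log(2) / math.log(1.03))  # ~23.4 years

So a rate that sounds modest doubles annual emissions well before the end-of-century horizon these scenarios run out to.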

There is an official Intergovernmental Panel on Climate Change that created four different possible scenarios. Dr Karmalkar picked the scenario called A2, which comes with an associated “storyline” – the context of human activity that makes the numbers used in the scenario plausible. The A2 storyline is conservative: of the four choices it is the one that seems the most “like” the way our world really is, now:

…a very heterogeneous world with continuously increasing global population and regionally oriented economic growth that is more fragmented and slower than in other storylines.

In this story about our possible future, economic values outweigh environmental values, and regional development is pursued more than global strategies.

“There’s a cockroach.”

It is the difference between the two tests – the baseline and the potential scenario – that generates the actual prediction. The finding shows temperature becoming higher and the distribution narrower: the future “lies well outside the present day” and “that,” says Dr Karmalkar, “is a significant result.” Remember that long discussion about bias? The results for all regions show a cold bias – which means (if I understood this correctly) that the prediction itself is conservative, i.e., that reality could well be worse than these particular results predict. Warming in Central America is higher than the global average. Not only this, but the wet and dry seasons in Central America are going to be seriously affected. The model isn’t doing as well with precipitation as it is with temperature, but – even limping – what it suggests is grim. Basically, amounts of rainfall during the wet season are going to decrease, and some areas might even lose one of the rainy seasons entirely. In other areas, perhaps the second wet season will be extended and last longer, enabling a small increase in precipitation, but the overall loss of rainfall over the sea will trigger other effects – shifting pressure systems, decreasing sea level pressure, and strengthening trade winds – all of which will decrease precipitation.

Horizontal precipitation

It gets worse.  Dr Karmalkar did not say that. He would not.  He represented the science calmly, engaging an impressive display of slide jujitsu by answering every question posed during the defense with a quick scroll through his hundred (or more) back-up slides, pulling up the exact one to respond with precision to every query.

One of the most important sources of precipitation in Central America comes from clouds. The landscape includes tall mountains that touch the clouds (orographic cloud formation): moisture condenses directly onto the vegetation. (This is where the Golden Toad used to live.) Twenty to 22% of the total annual precipitation in Costa Rica comes from this direct source of moisture. Clouds form as a function of relative humidity, which is a function of temperature and pressure. Can you guess? The temperature goes up, which draws the ‘ceiling’ of relative humidity up too. Clouds no longer form at the usual altitude, but higher up. Bye bye, horizontal precipitation. What killed the Golden Toad? Possibly a phenomenon called moisture stress.
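A standard back-of-envelope rule (mine, not the dissertation’s) makes the mechanism concrete: the cloud base, or lifting condensation level, sits roughly 125 meters above ground for every degree Celsius of gap between air temperature and dew point. Warm the air faster than you add moisture and the ceiling climbs:

    # Approximate cloud-base height from the dew-point spread.
    def lcl_height_m(temp_c, dewpoint_c):
        """Lifting condensation level, ~125 m per deg C of spread."""
        return 125.0 * (temp_c - dewpoint_c)

    print(lcl_height_m(20.0, 16.0))  # 500 m today (hypothetical numbers)
    print(lcl_height_m(24.0, 16.5))  # ~940 m in a warmer scenario

Mountains that used to poke into the cloud deck no longer do, and the vegetation that drank directly from the mist goes thirsty.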

No Time to Lose

Again, this is my voice, not Dr Karmalkar’s.  When pressed by his committee whether “it is appropriate at this point to press the alarm and get the word out to conservation organizations and such?” Dr Karmalkar responded:

“Yes, we do have enough information to, maybe not press the alarm, but enough to say that something needs to be done…the Golden Toad disappeared in 1989; its population dramatically declined after the El Niño phase of 1986-87. If you look at the temperature anomalies of El Niño, they are only a degree or so. If one degree of change is affecting the species in the area, then certainly four degrees of warming is definitely large.

One of the other important things is that species do adapt to changes in climate. There are cases where plant species have migrated upslope, but that’s constrained by topography. In some cases, I talked of the cloud base heights going up, but another problem is deforestation, which has led to an increase in surface sensible heat flux. Land surface use alone can drive cloud bases even higher than the highest mountain peak.

We do have information to make the case that climate change of this magnitude might be serious.”

Theoretical and Computational Fluid Dynamics Laboratory
College of Engineering
UMass Amherst

A few days before his defense, the very-soon-to-be-Dr. Shiva promised to make his PhD defense as incomprehensible to a non-engineer as possible. He was teasing me, but it opens space for me to play with representing his work, not only on its own terms as I have tried to do with other friends’ dissertations. In this "Part 1" post, I’ve selected items from Dr. Shivasubramanian Gopalakrishnan’s defense that enable me to play with fluid dynamics as an analogy for language-based communication dynamics. My not-so-hidden agenda is to attempt a translation between disciplines that might serve as an impetus to potential collaborations for addressing cross-disciplinary problems (the global type, interwoven across institutional fields, such as climate change, grinding poverty, and widespread starvation, to name a few).

“Modeling of Thermal Non-Equilibrium in Superheated Injector Flows”

Dr Gopalakrishnan’s area of specialization is non-equilibrium phase change operations. The basic phase change he studied for his dissertation involves the change of liquid fuel into vapor in automobile and aircraft fuel systems. There are a whole ton of things that need to happen in order for a fuel to provide adequate power to an engine so that a car or plane can travel, and a fair number of things that can go wrong in the attempt, such as flash boiling and vapor lock. The engineers know all about these problems, but I had to do a bit of research. A liquid boils, for instance, not only as a function of temperature, but also as a function of pressure. Suppose one thought of a linguistic flash boil as the interaction of

    a) a word’s definition (its ‘temperature’) and

    b) the context in which the word is uttered (the environmental ‘pressure’).

Right word, right context: everybody happy.
Right word, wrong context: problem!
Wrong word, right context: just a goof.
Wrong word, wrong context:
potential domestic disturbance or international incident!
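For the fun of it, here is that 2x2 as a lookup table (purely illustrative, of course):

    # The word/context 'phase diagram' above, as a Python lookup.
    def utterance_outcome(word_fits, context_fits):
        return {
            (True, True): "everybody happy",
            (True, False): "problem!",
            (False, True): "just a goof",
            (False, False): "potential domestic disturbance "
                            "or international incident!",
        }[(word_fits, context_fits)]

    print(utterance_outcome(word_fits=True, context_fits=False))  # problem!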

Suppose we were able to slow down social interaction to 2000 frames per second (like this water droplet) in order to perceive how a single word enters language (and thus communication) as a whole?  Most people tend not to think much about the language we use unless/until something goes wrong, and then our energies focus upon repair. If we could cultivate more consciousness about how (for instance) individual word choices merge with larger pools of language use, then we might be able to diagnose discourse patterns and even design ways of communicating that work more efficiently in developing and implementing ideas that solve real-world problems.

In terms of the analogy I’m proposing here, how or when do words conserve mass and momentum without changing the substance or direction of established discourse or social patterns? When and how might particular words conform to the dictates of conservation while also accomplishing an alteration in substantive conditions that generates new forms of dialogue?

Vapor lock is not such a problem for cars anymore, but it remains a challenge for aircraft. Both issues involve the liquid becoming gas too soon. With flash boiling, part of the liquid fuel – but not all of it – superheats, leading to a two-phase (and thus inefficient) distribution of energy. With vapor lock, the bulk of the liquid vaporizes before practical use – also due to combinations of pressure and temperature. Vapor lock can cause a severe drop or even a complete stall in power. Not what you want to happen at high altitude! Nor in a conversation that you wish to proceed smoothly, for whatever reasons.

Suppose you need to talk with someone who uses a different language than you. A phase change is necessary for communication to occur. Suppose an interpreter (professionally trained, fluent in both languages) is available to transform the ‘fuel’ provided by your language into ‘power’ in the other language? This would be a phase change, yes? Keep in mind that in scientific categorization, liquids and gases are both fluids – they belong to the same medium. Similarly, English and Turkish, Spanish and Hindi, Malaysian Sign Language and Langue des Signes Française are all examples of the medium of language. The question of efficiency in fluid phase change is comparable to the question of comprehension in interpretation: the challenge is to identify the relevant factors and manipulate the conditions so that the interaction occurs with the least loss. In fluid heat exchange (see the sketch after the list for a summary), one considers the

  1. rate of downstream atomization, the
  2. starting point of the phase change – its location within the nozzle, the
  3. extent to which dispersion continues outside of the nozzle, the
  4. endpoint of phase change, and (finally) the
  5. overall emission characteristics: a comprehensive image, if you will, of what is happening when, where, and how that involves all interacting elements and environmental conditions.
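Since I keep wanting to hold all five in mind at once, here they are as a single record (the field names are my own shorthand, not Dr Gopalakrishnan’s notation):

    # The five spray considerations, bundled for reference.
    from dataclasses import dataclass, field

    @dataclass
    class SprayCharacterization:
        atomization_rate: float        # 1. downstream atomization rate
        onset_in_nozzle_mm: float      # 2. where phase change starts
        external_dispersion_mm: float  # 3. dispersion beyond the nozzle
        phase_change_end_mm: float     # 4. where phase change completes
        emission_profile: dict = field(default_factory=dict)  # 5. overall picture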


One can surmise that in addition to the environmental conditions of temperature and pressure, timing is crucial for effective fluid dynamic engineering! Time comes first in the list above (rate), requiring us to imagine the complicated system in four dimensions. Temporality is also one of the more obvious constituents of interpretation, as people using interpreters to communicate across language differences often express concern with the amount of time required for the interpreter to process the ‘injection’ before manifesting ‘emissions’. In aircraft, the particular mechanism that Dr Gopalakrishnan studied involved using the fuel system itself “as a heat sink to increase engine performance.”

Paralleling the practical application of a heat sink with interpretation, the question of efficiency involves the extent to which an interpreter dissipates the hot air, absorbing or otherwise deflecting excess energy that distorts the equilibrium of the relational exchange. This cooling effect of the interpreter is not intended to minimize an interlocutor’s intended meaning (a common concern), but rather, to enable the potential energy (one could say, the understanding) to be most efficiently utilized in whatever power application (voice - Blommaert: ‘the capacity for semiotic mobility’ (p. 69)) is called for: a sudden increase in speed (e.g., for emphasis), or a gradual drop in tone (perhaps to shift a debate from argumentation to persuasion).

Dr Gopalakrishnan’s work zeroed in (among other things) on the relationship between pressure and enthalpy. In terms of vaporization, enthalpy is “the energy required to transform a given quantity of a substance into a gas.” For some reason (unknown to me), the energy interpreters expend to transform language through a similar phase change operation is somehow expected not to change the substance. Liquid should not become gas! (Despite the fact that both are still fluids.) Put another way, the diction (discrete word choice) seems expected not to change despite the phase shift from one language to another! This is akin to expecting, with fuel, that the molecules of the resulting gas would remain exactly the same as the molecules of the original liquid: in which case, no energy would be produced at all, as there would have been no reaction.
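The quoted definition invites a one-line worked example (the latent heat below is a ballpark figure for a generic hydrocarbon fuel, my assumption, not a value from the dissertation):

    # Energy to vaporize fuel: Q = m * h_fg (enthalpy of vaporization).
    h_fg = 350e3  # J/kg, assumed latent heat for a generic hydrocarbon
    m = 0.050     # kg of liquid fuel
    print(f"{m * h_fg / 1000:.1f} kJ")  # 17.5 kJ to vaporize 50 g

The point of the analogy: that energy is real work, and it changes the substance. Vaporization without transformation would be no vaporization at all.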

Based on everyday experience, language “is incompressible” (as Dr Schmidt teased when I posed my analogy to him), yet – ironically? – there seems to be widespread social conditioning about languages that presumes an interpreter is magically able to perform phase changes (interpreting from one type of language/medium to another type of language/medium) without effects from environmental conditions. Occupational health and safety evaluations, not to mention professional lore and training, reveal that communicators in a cross-language interaction do need to consider

a) the capacity of the interpreter to store extra heat/energy (technically, thermal inertia) generated by interlocutors and

b) the potential for long-term damage to interpreters (and thus, the communication system) by constraints imposed by conditions of ‘social temperature’ and ‘social pressure’ (which can show up, in fluid dynamic terms, as cavitation).

Often, when the complex realities of language-to-language interpretation are surfaced, the fallback position is to eliminate the need for interpretation. “Get everyone using the same language.” Instead, I want to suggest that there are tremendous benefits to embracing the need for interpretation as an opportunity for highlighting precisely those areas and moments of greatest difference and thus of challenge. When communication appears to fail or feels inadequate, this can be taken as an indicator to those involved that the interaction potential has shifted from a single/shared perspective to a fuller range of views – which, if utilized, may suggest greater/deeper capacities and efficiencies.

One of Dr Gopalakrishnan’s innovations was to apply two different sets of equations to the problem of fuel injection efficiency. By coupling mechanisms that perform distinct tasks in different domains, Dr Gopalakrishnan was able to generate new knowledge about the overall process which will likely lead to improvements in efficiency. In a similar spirit, I seek to draw on (admittedly limited) paradigmatic knowledge from engineering about fluids with paradigmatic knowledge from the humanities about language. This task necessarily involves translation between the two disciplinary languages. To be successful, co-learners will have to want to make the effort to move beyond disciplinary monolingualism. I hope the compelling problems of our time provide sufficient motivation for trying to bridge the segregation.

In a way, interpreters are always trying to apply “two different sets of equations” to the problem of efficient communication. These are the ‘equations’ of culture and language particular to each communicator. The unique aspect of interpreting (as a complex system involving the rapid combination of distinct tasks across domains with an ever-changing mix of elements) is that the people involved also have the power to interpret – and re-interpret – the conditions. Unlike fluid dynamics, where the ‘temperature’ and ‘pressure’ are given factors of the environment (fixed, stable, presumably controlled/controllable), individuals in a communication process can always choose to maintain or change the context: to alleviate or increase the pressure, to drop or raise the temperature, to decide that any word – ‘right’ or ‘wrong,’ even if it generates vapor lock or superheating – can be worked with and turned to productive use. This takes effort, of course, and requires collaboration – therein lies the rub!

Coming up in Part 2: the challenge to traditional models of superheating fluids that only consider instability-based modes of breakup, the question of size vs quantity, and void fractions.

Coming soon: Ambarish Karmalkar and Arturo Osorio

Dr Linus Nyiwul, Resource Management
working the system: market enforcement of emission standards

Dr Siny Joseph, Resource Management
How COOL is your seafood?

Dr Anuj Pradhan, Human Performance Laboratory,
Department of Mechanical and Industrial Engineering

Anuj in a suit
(on Risk Awareness and Perception Training for young/new drivers)

Resource Economics
Stockbridge 217, UMass

Dr Linus Nyiwul’s dissertation defense was conducted almost exclusively in the language of math, with very little generic English explanation for the non-resource-management layperson. So I cannot write very much about it, except that it was obvious that his faculty members are excited about the potential of the framework Dr Nyiwul has created for government regulators to exploit market mechanisms by leveraging emissions standards against firms’ need to attract investors.
There are a couple of premises that Dr Nyiwul builds upon, including a perception that investors would prefer to put their money into “green” companies, and evidence that companies that improve their own environmental management systems experience increases in stock value (e.g., Feldman 1996). Dr Nyiwul described a whole lot of complicated stuff that needs to be properly balanced (a toy sketch of the firm’s side follows the list):

  • setting a standard,
  • needing to monitor to ensure companies are meeting the standard,
  • keeping the cost of monitoring low enough to be reasonable (for government) while
  • making the threat of monitoring real enough that companies prefer to comply rather than risk being caught and having to pay the penalty.
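My toy reading of that balancing act (mine, emphatically not Dr Nyiwul’s actual model): a firm complies when complying is cheaper than the expected penalty, which is the monitoring probability times the fine.

    # A firm's compliance calculus, in miniature.
    def firm_complies(compliance_cost, monitoring_prob, penalty):
        """Comply if cheaper than the expected penalty for cheating."""
        return compliance_cost <= monitoring_prob * penalty

    # The regulator's lever: find the cheapest monitoring probability
    # that still tips firms toward compliance.
    print(firm_complies(compliance_cost=100.0,
                        monitoring_prob=0.2,
                        penalty=600.0))  # True: 100 <= 0.2 * 600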

Somehow all those things get crunched through some equations that calculate

  1. “marginal damage” (whatever this means! it apparently refers holistically to “society”) and
  2. monitoring costs (to the government) and
  3. costs of compliance (for the firms)

Now, where it gets really interesting is when the government establishes two emissions standards: a regular standard (the minimum to be deemed “in compliance” and avoid penalties) and an overcompliance standard, which would earn a special certification proving uber-greenness (or something en route to such glorified status). There is a pilot project currently underway, the National Environmental Performance Track (NEPT), which has weaknesses but whose results – plugged into Dr Nyiwul’s equations – demonstrate that TWO STANDARDS IS GOOD POLICY! Not to mention that firms which earn the overcompliance certification have a special marketing asset to appeal to investors. (They have to meet the minimum “regular” standard first, then apply and demonstrate accomplishment of the overcompliance standard.)

There was some fancy problem-framing, as Linus described one finding, saying that it came about in one way if you set the problem up this way, and comes about in another way if you set the problem up that way. (I love the fact that subjectivity can be found in math!) There are some issues with firms getting to self-report emissions (apparently without verification, unless the regulator conducts the actual monitoring). And there was quite a discussion about looking at the problem endogenously: with free entry into and out of the market. And output and size effects really matter (but cannot be reversed) in terms of the direct and indirect effects of enforcement costs. Yeah, I don’t really know what those sentences mean in “real” economic terms, but there may be other things in play at times which can lead to inconclusive results.
But… drumroll, please! Dr Linus Nyiwul concludes, and his faculty agree:

“An optimal tax rate is smaller than the social marginal damage for a fixed n and no market imperfections.”

The challenges that issue forth from Dr Nyiwul’s work include (in no particular order):


  1. identifying which are the important uncertainties (given that anything could be uncertain except for whatever is under direct regulatory monitoring)
  2. defining clearly what “overcompliance” means (if “compliance” means paying the right tax, i.e., reducing emissions in order to minimize tax…. does overcompliance move a firm into a “credit” situation?)
  3. how to extend the framework from a single firm to an industry
  4. identifying how the framework as it is fits within known policy issues and concerns, and
  5. extending the frame beyond emissions to look at a lot of other policy issues.
Resource Economics
UMass, Amherst

For her final oral examination for a Ph.D. in Resource Economics, Siny Joseph presented an analysis of Country of Origin Labeling (COOL) for seafood. I echo the words of the external member of her committee, who said,

“After reading this paper, I pay more attention to my seafood.”

Dr Siny Joseph’s field is I.O. Economics – a term that I had to Google after the defense! My complete ignorance of the jargon in this field should alert you to the high probability that I have misconstrued or misunderstood major elements of her work. I will do my best to summarize and hope for correcting comments as needed.

Extrapolating from the Wikipedia entry and my limited exposure to other disciplines, Industrial Organization explores the economic interaction between two dynamic forces:

  1. the strategic behavior of firms (which I believe is the purview of my friends specializing in strategic management) and
  2. the structures of markets (statistical analysis like I’ve never seen!)

Given my lowest-score-in-the-cohort competence in all things math, most of the substance of Siny’s analysis and discussion with her Committee Members occurred in a language I cannot even pretend to understand: replete with “k-bars,” and K’s with subscript L’s and H’s, “thetas” and fixed parameter values composing profit maximization formulas… Go grrl go! Her findings, however, were described in comprehensible English – and they are fascinating.
(photo: Siny answering a question)
Seventy percent of seafood purchased by consumers in the U.S. is imported; of these imports, 80% comes from less developed countries. COOL (Country of Origin Labeling) is legislation introduced in the 2002 Farm Bill, and implemented with seafood in 2005, with the idea that food quality and food safety are linked with where the food originates. Coincidentally, COOL is being extended to more foods this year with continuing debate over exemptions and on-going criticism of delays, making Dr Joseph’s research findings immediately relevant. Regarding seafood, huge sectors are exempt: restaurants and other food service providers, specifically, and products deemed to be “processed.” In general, then, COOL applies to the seafood you buy in a grocery store or market to cook at home.
It seems the first major task in an I.O. economic analysis is to define the boundary between what is included and what is excluded from the study. Siny focused on the US market, presumably because the boundaries could be readily established. (In a case study on shrimp, she explained the distinction between a “covered” and “uncovered” market, explaining she’d had to go with the former – specifically an undifferentiated market – because the mathematical expressions for the latter were unmanageable. Basically (I think!) this means using idealized equations rather than ones more representative of real life.) Generally, Americans will assume that seafood of domestic origin is of higher quality than seafood of foreign origin, and consumers are most willing to pay the costs of labeling during and immediately after food scares – so that they (we, smile) can make (at least) this basic differentiation.
But (I kept thinking to myself) – labeling after a scare doesn’t do much to protect consumers during the scare and of course has no contribution to risk prevention whatsoever. So why isn’t labeling just done, as a matter of business habit? “Because,” Dr Joseph explained, “firms can masquerade low quality seafood as high quality when consumers don’t have all the information, and that’s where the profit comes from.” She and her committee members debated nuances of the statistical measurements, recommending and justifying choices of particular statistical tools, but did not question Siny’s basic finding that (now, with only three years of info available) the greatest profit comes under what’s called “voluntary COOL” (which does occur with some seafood products), followed by partial implementation of COOL (the status quo), and drops the lowest under “total COOL” – an ideal she recommends because “real consumption is greatest when there is full implementation of COOL.”
The rub for me during the whole presentation is the use of this indicator called WTP: Willingness to Pay. What I’d like to see is a complementary WTP2 (squared) equation: Willingness to Profit. Somehow the whole debate seems framed with WTP2 as an unquestionable given – companies have the inalienable right to maximize profit and consumers have to pay for safety. It just strikes me as wrong; at least out-of-balance. Firms can afford to pay much more than any individual can! Anyway, Siny’s Committee engaged vigorously with her findings: “I like the story you’re trying to tell,” said a professor by speakerphone, wondering about pursuing the angle of diversion, and all of them wondering about policy recommendations based on these findings.
There was a measure of “Total Welfare” that supposedly mixes the best consumer outcome with the best business outcome…. and Dr Joseph did present some evidence that companies would label voluntarily under certain/specific conditions (of known/demonstrated consumer demand?), but for the most part companies are trying to duck this completely. For instance, shrimp traders are required to label unprocessed shrimp, so they would rather do something that qualifies as “processing” in order to avoid labeling. Doesn’t it cost to do that, too? Honest – I get very confused! Why is one type of cost preferable to another? I think someone needs to institute an equation such that consumer WTP cannot exceed 1/2 the square root of the actual incurred cost apportioned over the entire volume in order to somehow link a decrease in the firm’s WTP2 (willingness to profit) with the increase consumers are willing to pay. (Which is probably why I’m not an economist.)
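If I were to write my half-serious proposal down (my symbols: C for the actual incurred labeling cost, V for the entire volume it is spread over), it would look something like

    \mathrm{WTP}_{\mathrm{consumer}} \;\le\; \tfrac{1}{2}\sqrt{C/V}

so that any increase in what consumers are asked to pay is formally tethered to what the firm actually spent.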
(figure: Siny’s graph)
Nonetheless, even if the current data is not totally amenable to a single clear and concise argumentative point, I definitely agree with Siny’s committee member: “I like your plan of attack.” I want to be able to argue convincingly that the government (through legislation) should be on the consumer’s side – not only in the grocery store, but I would also like to be able to confirm the quality of seafood purchased in restaurants.
Keep it up, Dr Siny Joseph!

Industrial Organization, Wikipedia
Market coverage strategy,
Amherst, MA

Triple Points for anyone not present – and an equitable consolation prize!

(photo: quiz time)
Only four sets of feet open this quiz…it was not a twelve pillow night, although there were more than a few direct hits!
The Innocent One displayed her growth by leaps and bounds. The (nearly always) Late One had his first shock when he saw that the jar was empty: no driving until that sucker was caught! (Not to be confused with the fictional movie, Man in a Car, although a conflation of Man&Snake in a Car might make decent competition with Snakes on a Plane.)
Warning: tea sharing customs vary (photo: bhel), as does etiquette for surprise birthday parties. Age protects one not from the practical joke, but it sure helps the food preparation!

(photo: something special)

“Everything vibrates at really low frequencies.” Huh?

Personal favorite: “Someone called the lab and asked for my partner and I said he wasn’t here. ‘There’s another guy,’ he said, ‘but I can’t pronounce his name.‘” (Me either.)

“Let’s not talk about ‘we’ at this point.”


  • Five points each to the first person who correctly identifies all four sets of feet, and both pictured dishes, in order.
  • One point each to the first person who answers the following questions.
  • Five points for each speaker identified in any/all included references.
  • Five points for each explanation of context for any/all included references.
  • All responses must be posted as ‘comments’ to this post.
  • No responses will be revealed for at least 24 hours from email notification.
  • Points will be tallied and posted as a comment within 48 hours from the original email notification.
  • The winner(s) will receive a home-cooked meal from yours truly.

Ready, Set, Go!

  1. Who was even later than me and my erstwhile hosts to the famed Mumbai wedding?
  2. Whose snores might bring down the house?
  3. Which First Lady is shopping for a dog as spouse of the President of the Indian Student Association?
  4. Whose birthday was it?
  5. Who and what was the issue with that shirt’s cut in the back?
  6. Does someone really eat like a camel?
  7. Who is the perfect stand-in for a working-class driver (in any country)?
  8. Visa? Who needs a visa?
Human Performance Laboratory
Department of Mechanical and Industrial Engineering
E-Lab II
University of Massachusetts

(figure: 46 glance points)

The forty-six “glance points” represented in this graph illustrate eye gaze tracking during driving. (Now!) Dr Anuj Pradhan has been crucial in co-developing the RAPT novice driver training over the course of a six-year doctoral degree and four experiments. Risk Awareness and Perception Training (RAPT) combines simulation and field techniques for assessing new drivers’ scope and skill in anticipating potential risks while driving.
Did you know?

  • Car accidents are the leading cause of death for teens in the US
  • Teenagers, during the first six months of driving, have an eightfold increase in the risk of dying in a car crash
  • Teenagers, in general, are four times more likely than older drivers to die in a car crash
  • In numbers: teenagers are involved in 4.7% of the six million crashes annually in the US but account for 13% of the fatalities (the ratio is worked out just below)
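That last bullet is worth a quick ratio (my arithmetic):

    \frac{13\%}{4.7\%} \approx 2.8

so teenagers are overrepresented among crash fatalities by nearly a factor of three relative to their share of crashes.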

Previous research has identified three main causes of teenage accidents, including failure to adjust speed appropriately to conditions (20.8%), failure to maintain attention to the task (23%), and – the biggest – failure to conduct an appropriate search of the driving environment (42.7%).
After his presentation, Dr Pradhan’s Dissertation Committee gave him some grief about the distinction he wants to draw between “tactical scanning” and “strategic scanning.” (They also asked him, right at the beginning, to take off his suit jacket and relax. This may have been the signal that they planned to heat up the room…!) The first question, however, came from one of the faculty during the presentation, and it involved clarifying the dependent variable of eye movement. Dr. Pradhan’s first experiment established a correlation between the recognition of risk (seeing it) and the knowledge that risks may be present (use of eye gaze to scan in order to identify (i.e. see) them if they are present).
Two more experiments refined the technique for linking eye movement with perception and recognition of risk. Results from the three experiments indicate improvements in visual search behavior in all driving situations, from the benign – when no risks are present, to situations with a minimal possibility of risk, and on up to situations with obvious dangers.
In other words, the students and volunteer test subjects who participated in these experiments learned about the strategic need for constant maintenance of visual attention across the broad driving environment, which might require the driver (i.e., me – or you!) to engage in specific tactical behaviors in order to reduce risk – or be able to implement evasive action should a risk materialize, because one has seen it in time! My contribution came with the fourth experiment: I got to test out the version in development. My experience (as an “older driver,” grin) may or may not have aided in refining the program, but it certainly reinforced for me that there is a purpose to where, when, and why I look and watch in the ways that I do while driving. (I learned that I could still do better!)
The need for this kind of training tool in driver’s education programs everywhere is immediately and obviously apparent. I was also fascinated by the application of temporal and spatial algorithms to the eye movements captured by the Mobile Eye movement tracker. Time and space coordinates for every eye movement had to be combined and cross-referenced, in a Fixation Identification Algorithm, with prior and subsequent eye movements in order to define a glance. These glances are then superimposed on the objects in the driver’s visual range and categorized as on-road or off-road. In this way, the Mobile Eye Tracker pinpoints whether the driver’s eye looked directly at the truck parked on the side of the road in front of a pedestrian crosswalk, when (from near or far), and for how long. Does the gaze return, or simply pass on to other objects?
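The fixation-identification paper cited below is about exactly this step, so here is a minimal dispersion-threshold sketch of the idea (my simplification in Python, in the spirit of the I-DT family of algorithms, and certainly not the Human Performance Lab’s code). Gaze samples are (time, x, y) triples; a fixation is a run of samples that stays inside a small spatial window for a minimum duration.

    from typing import List, Tuple

    Sample = Tuple[float, float, float]  # (t_seconds, x, y)

    def find_fixations(samples: List[Sample],
                       max_dispersion: float = 1.0,
                       min_duration: float = 0.1) -> List[Tuple[float, float]]:
        """Return (start_t, end_t) spans where gaze stays tightly clustered."""
        fixations: List[Tuple[float, float]] = []
        i = 0
        while i < len(samples):
            j = i + 1
            # Grow the window until the points spread out too far.
            while j <= len(samples):
                xs = [s[1] for s in samples[i:j]]
                ys = [s[2] for s in samples[i:j]]
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                j += 1
            j -= 1  # samples[i:j] is the largest window that stayed tight
            if samples[j - 1][0] - samples[i][0] >= min_duration:
                fixations.append((samples[i][0], samples[j - 1][0]))
                i = j
            else:
                i += 1
        return fixations

Consecutive fixations on the same object would then be aggregated into the glances that get categorized as on-road or off-road.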
In other words, the direction of eye gaze can indicate the driver’s perception of risk – or lack of it. Once a driver is informed of their own eye movement behavior, their awareness of risk is enhanced (or should be; I think the larger research program of the Human Performance Lab is lacking a necessary qualitative element). In fact, after training in the tactics of using visual scanning to perceive the possibility of risk, Dr. Pradhan shows that drivers improve risk awareness in four significant ways:

  1. Trained drivers maintain a wider horizontal range of vision
  2. Trained drivers shift half their glances off-road, looking more often to the right, where more risks presumably originate (compared with the untrained, who look left and right more or less evenly)
  3. Trained drivers glance off-road for slightly longer times (presumably considering the extent to which the conditions in sight compose or obscure a risk)
  4. Trained drivers learn not only to transfer recognition of risk types between similar scenarios, but also transfer the skill of tactical scanning to different scenarios than those they were exposed to during training

Throughout the presentation, I kept thinking, “if only” – if only I had had this knowledge five years ago — the language of “visual scanning,” “risk perception,” and “risk awareness” — then Hunju’s driving practice might have gone more smoothly for both of us!
Anyway, Anuj’s defense rolled along. Dr Krishnamurty pressed him on the relevance or distinction between top-down and perspective views, which Dr. Pradhan handled with aplomb: “I got you, excellent answer.” No wonder Jeff calls Anuj, “my Yoda.” The (self-named) Curmudgeon wouldn’t let go of the tactical/strategic distinction but I wager this is merely ground for the next stage of hypothesis testing and theory building. The Committee Chair, Dr Fisher, supported Anuj throughout. They grilled him for a mere quarter of an hour after kicking out us observers (selected members of the fan club). And then they only made him wait for about that much longer (or less) before Dr Fisher came out and ushered him back in with a handshake and announcement:

“You’re done!”


The Younger Driver: Risk Awareness and Perception Training, Human Performance Laboratory, UMass Amherst
Using Eye Movements To Evaluate Effects of Driver Age on Risk Perception in a Driving Simulator, by Anuj Kumar Pradhan and five others
glance, Merriam-Webster Online Dictionary
Fixation-identification in dynamic scenes: comparing an automated algorithm to manual coding, Proceedings of the 5th Symposium on Applied Perception in Graphics and Visualization
Conference: Aptitude for Interpreting

Imagine my surprise upon entering the lobby at Lessius University and witnessing a conversation in American Sign Language! My brain has been so otherwise-occupied that it never once crossed my mind that

    a) anyone other than European spoken language trainers/researchers would attend or that

    b) I might actually know people!

It was absolutely delightful to re-encounter respected colleagues, meet some of the luminaries whose work is required reading, and make new friends (although one always wonders whether they’ll claim me, and/or for how long!) ;-)

We started quite seriously, with the keynoter, Mariachiara, setting the context with a superb history of the tension between innate talent and built skill. Are interpreters born or made? Perhaps it is a both/and kind of question, with challenges of re-molding/re-training those with “the aptitude to perform” and fresh cultivation of those with “the aptitude to learn.”

At the end of the day, Miriam reflected that we (interpreter researchers) have learned that we’re asking the right questions, but we don’t seem any closer to clear answers! One needs only hark back to the presentations of Her Majesty of No Results and the Princess of No Significance to find evidence supporting Miriam’s perception. Are we guilty of trying to turn a sow’s ear into a silk purse?

“You’re argumentative!” one of my dinnermates proclaimed, as I sought to champion a shadowing task based on the persuasive argumentation of the aforementioned Queen.

Ignore that interpreter in the corner!

I don’t want to be accused of breaking the pinkie pact (especially since I wasn’t at the presenter’s dinner the night before when they apparently made a rule not to ask each other hard questions), but . . . aren’t the hard questions the ones that most need to be asked?!

“You’re against essentialism in all forms!” Miriam bought me a coffee. :-)
(I think this means we are now bonded for life.) Franz invited me to come after him hard….which I did but it wasn’t easy going. First he thought I was arguing that “everything is cognition,” which he agreed is a way that knowledge in the field can be understood. It took some fancy footwork to get across the idea that what I am critiquing is the way that we (interpreters, interpreter trainers, interpreting researchers) collude in assuming that everything in the field can be broken down into nice, neat, discrete boxes. Miriam rephrased this as the human propensity to put everything in categories.
“It’s interesting, but I don’t agree with half of it!” (Shhhsh that interpreter in the corner!)

“Why does your badge say ‘Belgium’ but you are speaking English?” Heidi was trying to process where I was from and why I was delinquent in signing up for the conference dinner. Really, I’m here under cover . . . just as there are “slides no one wants to see” (recall the pinkie promise), there are also “some matters untouched” (Cronbach and Snow 1977:6).
“Is this rubbish?” (Get ready, I’m gonna be asking you, Chris!) Meanwhile, Amalija has two weeks to devise the perfect comprehensive provable aptitude test for her incoming screening. She has the power! As Sarka explained,

“some of these people want to be translating Shakespeare’s sonnets, they don’t want anything to do with other people!”

One of the huge dilemmas in interpreter training is predicting when a potential interpreting student might succeed against the evidence that convinces us they won’t, and how to justify the investment of resources when even students with all the promising signs turn out, in the end, to be unable.

“There are no future facts.” (Robert S. Brumbaugh, 1966)

What can we learn from the ones who had it made?

It is as if we all contain a multitude of characters and patterns of behavior, and these characters and patterns are bidden by cues we don’t even hear. They take center stage in consciousness and decision-making in ways we can’t even fathom.

The East-West debate came up: does one interpret only into one’s mother tongue, or from a mother tongue into another fluent language? Why, I wonder, are people so invested in this directionality? Meanwhile, the non-sign repetition task of nonsense biological motion that Chris reported seems an awful lot like shadowing to me…. and can I just mention how cool it is to attend a conference with five active languages, three of which are signed?! I am not able to articulate the significance of increases in visual memory, but it caught my attention…advanced interpreters can apparently correctly select geometric shapes after a delay more rapidly than beginning interpreters. Perhaps this is related to what I’ve noticed in my own neural net, specifically the new capacity to learn math after twenty years of signing.
Brooke had the two best slides so far, understating the case for the performance of simultaneous interpretation: “we have a lot to do.” (Can I get copies? Beg beg beg!) I’m especially intrigued by the risk/avoidance measures….just a few days ago I came up with the title for my next conference proposal: “Risk, Resignation, and Loss: Interlocutors on Interpretation in the European Parliament.” (Next week I present some of the results at a conference on Mikhail Bakhtin in Stockholm).
I love the metaphor of the airplane and its engines. Sarka and Heidi get credit for this one together, right? There are the pair (or more) of wing engines that are all about cruising, and then there’s the solo job in the tail, which is all about getting up to altitude. Sherry might win the prize for getting the earliest start, although there is a four year discrepancy concerning the age at which she began interpreting: four? Eight? Then you’ve got peeps like me who didn’t even start learning a second language until 28! Anyway, I am pleased to go along with the decisions that “all of us made” in Sherry’s “we”, particularly the one about merging modalities. The two tests she shared intrigue me: the CNS Vital Signs and the Achievement Motivation Inventory.
I hope no one throws a wobbly because of anything I’ve written here. I was duly warned that someone would have my guts for garters if I transgressed too far. Might I ask, instead, for a soft word on the side and the chance to edit? :-)

online discussion forum

Language is a force.
Language names, and by naming, it calls into being. This is how social reality is constructed and maintained. I think it is an effect of quantum mechanics, but smarter minds than mine are needed to make the connections in a compelling scientific manner.
Last fall I wrote a post on some dynamics of dialogue and discourse, in which I engaged with ideas of a discursive psychologist, Michael Billig.

The core of the argument laid out by Michael Billig (in the articles from Discourse and Society 2008, Vol. 19, Issue 6) is that we who think in terms of critical discourse analysis (CDA) need to be acutely aware of our own uses of language, lest we repeat some of the very elements of language use that we critique in others. Billig’s concern is with social scientific language in general; he selects CDA for heuristic and practical purposes: “It should be a major issue for analysts who stress the pivotal role of language in the reproduction of ideology, inequality and power” (p. 784).

In particular, Billig goes after the academic/theoretical use of nominalization, which is a shorthand way of condensing a particular dynamical concept (something with a lot of parts) into a single term. Debate over the costs and benefits of using nominalization seems to swing on the temporal grounding of interlocutors. I’m thinking at the mundane level as well as at the level of ideological reproduction. For instance, does saying something about (i.e., naming) tensions in a friendship necessarily make them worse, or can it provide a means to shift footings? At the precise moment of making the utterance, there may be a spike in bad feelings – all that tension concentrated and released in the acts of speaking and hearing. But I think that it is what comes next (at least, so I hope) that becomes determinative for the subsequent unfolding. When nominalization is at play, Billig argues, there is a tendency to depersonalize behavior or action such that individual contributions to whatever unfolds are lost to perception. So the pattern of tensions enacted when one party actually says something directly about the presence or evidence of tension becomes bigger than the minute social interactions that compose it. The pattern itself becomes “the thing”, and individuals are simply swept up in it, all agency erased.
The question is, when things are not going the way one wishes, what next? I watched an interesting video on the synthesis of happiness this morning (20 minutes long) which argues that if we assume irretrievability, then we enhance our capacity to choose happiness. I’m wondering if this basic precept – that what’s done is done and can’t be changed – could guide many other choices, including the ways we respond when we find ourselves seemingly trapped in a discourse we don’t necessarily want. I believe it is the element of acknowledgment that I am finding most attractive. Perhaps my general communicative strategy is to reduce uncertainty (see What You Don’t Know Makes You Nervous) in order to make choices clear.

“Are you speaking English?” asked the marine biologist. (I get that a lot.) NGO told me about dynamic semiotics while The Woman from Ghent provided commentary on the group’s unique social interaction – not to mention demonstrating the lesbian walk. Several times! Meanwhile, Irish informed me she’s “not really a tight bitch.” (I didn’t know that I was wondering!) ;-)
The length of my stay, my age, and my relationship status were determined (and double-checked), not to mention how I knew whom. I was spared “change the subject” moments since none of my ex’s are known to this community. :-) The night was divided quite evenly between laughter and dancing.
Yes, the work switch was definitely turned off – how else could I have arrived to my hosts’ place at 4:15 thinking it was just a bit past midnight?!
