The Society of Automotive Engineers has good news: Paul Green has been ‘messing with the devil’ (in a good way), as Lou Fancher reports

If the devil is in the details of standard measurements and terminology used in reported research on driving performance up to now, it has been a force that encourages confusion, not clarity. Prior to the June 2016 publication by the Society of Automotive Engineers (SAE) of Paul Green’s groundbreaking Recommended Practice J2944, Driving Performance Measures and Statistics, there was—unbelievably, if not unbeknownst to everyone in the field of automotive mobility—no common language for measuring how people drive.

A single driving practice, concept or term appearing in technical standards, journal articles, proceedings papers, reports and presentations often had up to a dozen different names, and authors defined the variation they used only 10-15 per cent of the time. Graduate students working to earn their PhDs, designers and engineers at automotive labs, people in government transportation departments, scholars and researchers relied on measures and statistics that were essentially apples-to-oranges comparisons.

“It was chaos,” says Green. “Anyone could use any word they wanted. No one was asking, ‘What do you mean?’ Everyone had different ideas of what the words meant but there wasn’t a push to be rigorous.”

Dr. Paul A. Green is a research professor at the University of Michigan Transportation Research Institute (UMTRI) and in the University of Michigan Department of Industrial and Operations Engineering. The author of more than 300 journal articles, proceedings papers and technical reports, he has been the lead author of landmark publications including the first set of US DOT telematics guidelines, SAE recommended practices concerning navigation system design (SAE J2364, the 15-second rule) and design compliance calculations (SAE J2365). At UMTRI, Green’s research focuses on driver distraction and workload, navigation-system design, motor-vehicle controls and displays, partially automated vehicles and related topics.


Recognizing the deficit and the long-term danger of vague, inconsistent language in published research, Green sought solutions. At an SAE meeting, he and co-author Dr. Daniel V. McGehee, Director of the National Advanced Driving Simulator and associate professor of mechanical and industrial engineering at the University of Iowa, encountered Tufts University graduate student Mark Savino. They suggested that Savino explore the lamentable quality of driver performance terms for a master’s thesis. Savino came back with 13 different names for “lane departure” and other verbatim examples. Shocked, armed with compelling statistical evidence of the problem and determined to identify and define important driving terms, Green, McGehee and retired Ford Motor Company human factors expert and writer Gary Rupp embarked on an eight-year marathon.

The resulting 171-page document—the final report after 300 drafts and 10 times the length of SAE’s average document, according to Green—defines over 50 terms. For consistency, a single name is assigned to each term, with multiple definitions designated as “option A, B, C,” and so on. SAE J2944 relies primarily on the AASHTO Green Book, the Highway Capacity Manual, and the Manual on Uniform Traffic Control Devices, among other foundational documents. It is supplemented by more than 300 references; SAE documents typically have approximately five.

McGehee has worked with Green for over 25 years and says the UMTRI researcher is one of the most detail-oriented scientists in the field. “There is no one else that could have pulled this document off,” he says. “Any person doing driver performance research – from government labs, industry or academia – will use this standard. It will be the first place any reputable researcher will go, prior to designing a study or doing data coding.”

Green deflects attention to his leadership role, saying, “If anyone was important it was Gary Rupp. He is a master at getting the language right.”


SAE J2944 establishes consistent definitions for what have been broad, casually used terms, including “gap,” “headway,” and “braking response time.” Green says that people sometimes use “headway” when referring to “gap,” and vice versa. Gap is the distance between a driver’s front bumper and the rear bumper of the vehicle ahead. Headway is measured as the distance from the driver’s front bumper to the front bumper of the lead vehicle. But in some driving simulator applications, headway is the distance between the vehicles’ centers of gravity. What happens if a researcher studying tractor-trailer trucks uses headway, but means gap? That’s a 55-foot error. And wouldn’t studies comparing headway but using differing definitions produce useless data, especially if the measurements aren’t re-calibrated or a shared standard applied to the research? SAE J2944’s six definitions of headway eliminate these errors.
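The gap-versus-headway confusion can be sketched in a few lines. This is an illustration only: the positions, the 55-foot vehicle length and the function names are assumptions for the example, not J2944’s option labels.

```python
# Illustration: confusing "gap" with front-to-front "headway" for a lead
# vehicle of length L shifts the measurement by exactly L (here, 55 ft).

def gap_ft(lead_rear_bumper_ft, follower_front_bumper_ft):
    """Gap: follower's front bumper to the lead vehicle's rear bumper."""
    return lead_rear_bumper_ft - follower_front_bumper_ft

def headway_ft(lead_front_bumper_ft, follower_front_bumper_ft):
    """Headway (one common definition): front bumper to front bumper."""
    return lead_front_bumper_ft - follower_front_bumper_ft

LEAD_LENGTH_FT = 55.0      # assumed tractor-trailer length
follower_front = 0.0       # follower's front bumper position along the road
lead_rear = 120.0          # lead vehicle's rear bumper position
lead_front = lead_rear + LEAD_LENGTH_FT

print(gap_ft(lead_rear, follower_front))       # 120.0
print(headway_ft(lead_front, follower_front))  # 175.0
# The two measures differ by the lead vehicle's length: 55.0 ft.
print(headway_ft(lead_front, follower_front) - gap_ft(lead_rear, follower_front))
```

Swap one definition for the other without re-calibrating, and every observation in the data set is off by the length of the lead vehicle.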

Another illustration of why careful use of technical terms is a good thing: braking response time. “It’s a collection of measures, not just one,” says Green. “It could mean I’ve just lifted my foot off the accelerator; I just touched the brake pedal; the brake lights just came on; or that I’m braking 100 per cent.” The value differences are enormous: SAE J2944 applies specificity to the term.

Other examples? “Time to collision” is separated into two data pools resulting from the application of common but different equations: one based on the distance between vehicle A and vehicle B and the difference in their velocities, the other also factoring in acceleration. “You get enormously different results depending on the equation used,” says Green. “One number can be double the other.” Similarly, “lane departure” is refined from a broad, generalized term into specific categories, with options including “about to depart,” “departing,” and “have departed.”
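The two time-to-collision computations Green describes can be sketched as follows. This is my own formulation of the two commonly used equations, not a restatement of J2944’s definitions, and the example numbers are invented.

```python
# Two common time-to-collision (TTC) computations: constant-velocity
# versus acceleration-aware. "Closing" quantities are follower minus lead.
import math

def ttc_velocity(range_m, closing_speed_mps):
    """TTC assuming both vehicles hold their current speeds."""
    if closing_speed_mps <= 0:
        return math.inf  # not closing: no collision predicted
    return range_m / closing_speed_mps

def ttc_acceleration(range_m, closing_speed_mps, closing_accel_mps2):
    """TTC solving range = v*t + 0.5*a*t^2 for the earliest t > 0."""
    a, v, r = closing_accel_mps2, closing_speed_mps, range_m
    if abs(a) < 1e-9:
        return ttc_velocity(r, v)
    disc = v * v + 2.0 * a * r
    if disc < 0:
        return math.inf  # closing speed reaches zero before contact
    roots = [(-v + s * math.sqrt(disc)) / a for s in (1.0, -1.0)]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else math.inf

# 30 m apart, closing at 5 m/s:
print(ttc_velocity(30.0, 5.0))            # 6.0 seconds
# Same scenario, but the follower is also gaining 1 m/s^2:
print(ttc_acceleration(30.0, 5.0, 1.0))   # about 4.22 seconds
```

With the same raw data, the velocity-only equation reports 6.0 seconds while the acceleration-aware one reports about 4.2 — exactly the kind of divergence that makes cross-study comparisons meaningless unless the equation is named.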

The problem of vague terms and statistics in on-road experiments is compounded when driving simulator studies enter the mix. In many cases, the simulator numbers may not reflect the real world. Once published, a human factors specialist, researcher or graduate student might miss the discrepancy, structure a comparative on-road study and then issue a report. Under the current situation (minus SAE J2944’s exactness), Green says that unless a vehicle engineer or another person recognizes that a simulator study’s numbers are atypical and says, “No one drives like that,” the comparisons that result from mismatched studies might be published, but are useless. Additionally, simulator research held up to the light of typical numbers might be adjusted during the pilot phase, or an article based on a simulator study might be found to have no real-road application and not be published at all. Ultimately, the document answers critical questions: How does a study compute a mean when one of the numbers is infinite? Do researchers throw away large numbers above a certain threshold? SAE J2944 tells a researcher how to process data.
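The infinite-mean question is easy to demonstrate. Time-to-collision, for example, is infinite whenever the vehicles are not closing, so a naive mean is useless. The remedies sketched here (a median, dropping infinite values, or capping at a stated threshold) are common analyst choices offered as an illustration, not a restatement of J2944’s specific processing rules, and the sample data are made up.

```python
# Why "how do you compute a mean when one number is infinite?" matters:
# a single infinite TTC sample makes the naive mean infinite.
import math
import statistics

ttc_samples = [2.1, 3.4, math.inf, 1.8, math.inf, 4.0]  # seconds (invented)

naive_mean = statistics.fmean(ttc_samples)              # inf: uninformative
median = statistics.median(ttc_samples)                 # robust to a few infs
finite_only = [t for t in ttc_samples if math.isfinite(t)]
censored = [min(t, 10.0) for t in ttc_samples]          # cap at a stated threshold

print(naive_mean)                     # inf
print(median)                         # 3.7
print(statistics.fmean(finite_only))  # 2.825
print(statistics.fmean(censored))
```

Each choice yields a different summary statistic from identical raw data, which is why a standard has to say which one a reported “mean TTC” actually is.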


SAE J2944’s impact is both immediate and long-term. Without the document, a researcher who today wants to use a term accurately could write a complex, explanatory paragraph—leaving room for perhaps only one reference. Some research publications are limited to five or six pages: pertinent findings might be sacrificed. But with J2944, a researcher can write, “I measure lane departure, option D,” and reference J2944. Because the document defines each term, identifies where it first appeared, references the key studies in which it was used, and provides statistical distributions and other source information, a short sentence replaces a long paragraph. One term, an option and “J2944” equals a complete description linked to multiple references.

Viewed more broadly, graduate students doing literature reviews for their research gain instant access to the most important studies for each driving term, measurement or practice. Researchers can compare studies accurately and design new studies that lead to improvements. “I can come up with a design, someone else can improve the safety of it in their study, and I can understand what they did,” says Green.

James Foley participated in developing J2944, contributing expertise from his 30-plus years’ experience leading automotive human factors research teams at universities and automotive centers in the United States. Retired since April 2016 from the Toyota Collaborative Safety Research Center and currently a consultant at CarProfConsulting, Foley writes in an email, “J2944 is an ambitious and critical document to advance automotive human factors and the supporting research. Paul and Gary Rupp spent untold and unpaid hours in researching and writing this document. The care and quality of the document is rarely found, even in other SAE standards.”

Foley says that dealing with the human factor in driving is difficult, and sometimes frustrating, as drivers are not homogeneous. “J2944 provides guidance and clear definitions to aid researchers in accurately describing driver behavior and related variables. In the future, if all researchers use J2944 as the fundamental reference document, it will be easier to understand and interpret the results of different research on the same topic.”


Green says that with worldwide adoption as the goal, the document was submitted in draft form to colleagues in countries other than the United States. “It’s published only in English, but we were sensitive that it’s going to get global use. We obtained comments from non-native English speakers to ensure that people developing vehicles and doing vehicle research in other countries could understand and use it.”

Adoption of the document is the next, crucial frontier to cross. Green says the SAE will advocate for conferences to update their author instructions to advise the use of SAE J2944. Crib sheets put out by simulator manufacturers might also begin to include the document’s terms and standards. “Eventually, the manufacturers will change their software to use the SAE names,” says Green. “And like I’m doing now at conferences, we can nag people. We can incentivize people to use the terms by saying that eventually, if they don’t use the terms, their papers will be rejected.”

In the meantime, it’s no surprise that Green, his eye forever on the details, plans improvements—more data on the distributions of the terms defined and adding missing terms—to future iterations. “It will need updating,” he says, “if only to accurately reflect the latest research.”

Lou Fancher is a San Francisco Bay Area writer who specializes in technology, science and education