By F. Louis Floyd, Special to CoatingsTech

I read with interest the Xperience article, “Facilitating Coatings Product Development with Artificial Intelligence,” that appeared in the August 2021 issue of CoatingsTech.

I recognize that artificial intelligence (AI) is useful for designing polymers, which involve only a few component variables. In fact, I used such simple modeling approaches throughout my 40-year career, long before AI became a named activity.

The August article presented a very aspirational vision of AI. However, my experience with finished coatings, which contain approximately 20 ingredients, is that AI has not been particularly helpful in formulating such products. So I want to offer a voice of caution about what we can expect from AI on the basis of results from only a few simple polymer compositions.

While the future of AI is not yet written, the field faces significant challenges.

For example, unlike polymers, coatings contain approximately 20 ingredients, each varying in quantity to achieve a desired balance of properties, and that balance itself varies among manufacturers because of differing interpretations of what the market wants. The possible combinations yield well over a million possibilities, plus interactions, so the complexity is exponentially greater for coatings than for simple resins; and the resulting tradeoffs are not even considered in the typical AI incarnations I’ve seen.
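
To give a rough sense of that scale, here is a back-of-the-envelope sketch of my own (the ingredient counts and levels are illustrative assumptions, not figures from the article): even if each of 20 ingredients is varied over only two or three discrete levels, the number of candidate formulations runs into the millions before any interaction terms are considered.

```python
# Back-of-the-envelope count of a coatings formulation design space.
# Illustrative assumptions only: 20 ingredients, each varied over a small
# number of discrete levels (real formulation work varies levels continuously).

n_ingredients = 20

for levels in (2, 3, 5):
    combinations = levels ** n_ingredients                # full-factorial count
    pairwise = n_ingredients * (n_ingredients - 1) // 2   # two-way interactions
    print(f"{levels} levels per ingredient: {combinations:,} combinations, "
          f"{pairwise} pairwise interactions to consider")

# Even the coarsest case, 2 levels per ingredient, already gives
# 2**20 = 1,048,576 combinations -- "well over a million possibilities."
```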

Of even greater concern is that, instead of smooth response surfaces in design space, coatings exhibit contorted surfaces with numerous discontinuities, such as phase changes. And nothing in AI so far is capable of dealing with discontinuities. This is my largest objection to expecting AI to automate the formulation process for complex materials such as finished coatings.
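
To illustrate the point, here is a minimal sketch of my own, using a made-up one-dimensional response rather than any real coatings data: fit a smooth model of the kind most data-driven tools assume to a response that jumps at a hypothetical phase boundary, and the predictions are worst precisely in the boundary zone.

```python
import numpy as np

# Hypothetical 1-D response with a step discontinuity at x = 0.6
# (standing in for a phase boundary), fit with a smooth polynomial surrogate.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)                  # a single formulation variable
y = np.where(x < 0.6, 1.0 + 0.5 * x, 4.0 - x)   # property jumps at the boundary
y += rng.normal(0.0, 0.05, x.size)              # small measurement noise

coeffs = np.polyfit(x, y, deg=4)                # smooth 4th-order least-squares fit
residual = np.abs(y - np.polyval(coeffs, x))

near = np.abs(x - 0.6) < 0.05                   # points within the boundary zone
print("max error near the discontinuity:", residual[near].max())
print("max error elsewhere:             ", residual[~near].max())
```

A smooth fit necessarily averages across the jump, so its largest errors concentrate in exactly the region where, as the following example shows, formulators can unknowingly be operating.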

The work that Pam Kuschnir and Richard Eley (from Glidden) did on waterborne coatings in the 1980s demonstrated that point well (see, for example, “Control of Foaming in Water-Borne Coatings,” by Kuschnir, Eley, and Floyd, JCT, 89(744), January 1987, pp. 75-87). Formulators were operating near a phase boundary, routinely wandering back and forth across it without even knowing that it existed. The result was chaos for them in both production and use. The specific example was persistent foaming that did not respond to conventional defoaming chemistries in the vicinity of a phase boundary (i.e., a discontinuity). Kuschnir et al. clarified this point in their work, thereby gently guiding formulators away from that critical boundary zone, which resolved their problems (see the paper’s final section, “Formulation Recommendations”). This was just one example of the discontinuities in behavior that exist in the real design space of typical coatings.

Another example of problems caused by passing through discontinuities is a viscosity maximum that retards equilibration for months, while processing occurs in a single day. Initial testing looks good, and heat-stability testing may even look fine (because elevated temperature temporarily suppresses the viscosity maximum), but room-temperature equilibration is greatly delayed, manifesting as viscosity drift. No amount of surfactant or process work can solve this, because it is a resin-dilution-curve pathway issue, not simply a compositional one. Similar effects have since been observed with solventborne alkyds as well.

My skepticism is bolstered by recent articles in the Wall Street Journal (WSJ) and elsewhere that report on some of the shortcomings of AI.

David A. Shaywitz, author of “When Machines Miss the Point,” WSJ, October 26, 2020, reviewed the book The Alignment Problem by Brian Christian. He wrote that the disconnect between intention and results defines the essence of the alignment problem—the difference between the purpose put into the machine versus the purpose we really desire. Sophisticated algorithms can do everything they are supposed to do, performing wonders, and still make bad recommendations and dodgy claims. Programmers often fail to recognize—much less seriously consider—the shortcomings of their models. So, the problem is with us and our models, not the algorithm. This is a timely reminder that even in our age of big data and deep learning, there will always be more things in heaven and earth than are dreamt of in our algorithms.

Christopher Mims, author of “Should Artificial Intelligence Copy the Brain?,” WSJ, August 4, 2018, argues that the biggest breakthrough in AI, deep learning, has hit a wall, and that a debate is raging about how to get to the next level. Because artificial networks don’t know things about the real world that a truly intelligent creature does, they are brittle and easily confused. In one case, researchers were able to dupe a popular image-recognition algorithm by altering just a single pixel. Dr. Gary Marcus, former head of Uber Technologies’ AI division and currently a New York University professor, posits that deep learning is woefully insufficient for accomplishing the sorts of things we have been promised. Until we figure out how to make our AI systems more intelligent and robust, we are going to have to hand-code a great deal of existing human knowledge into them. That means that the intelligence in artificial intelligence systems is not artificial at all.

John McCormick wrote an article, “Startups Use AI to Analyze Risk from Climate Change,” WSJ, August 9, 2021. The scary part of this story is that the algorithms used to compute climate risk for companies are trained on long-range global climate models. That amounts to a tautology: restating the same thing in another form. One model is based on another, and its developers then celebrate its ability to replicate the “results” of the predecessor model. Because no real data are used, the AI systems being introduced are proving ineffective at predicting future extreme weather events, and thus the risks that companies will face.

Naomi Oreskes, author of “Scientists: Please Speak Plainly,” Scientific American, October 2021, notes that even the language used by “experts” can be misleading. She wrote, “Computational scientists declare a model ‘validated’ when they mean that it has been tested against a data set—not necessarily that it is valid. In AI, there is machine ‘intelligence’ that isn’t intelligence at all but something more like ‘machine capability.’ ”

In an article, “Synthetic Data Used to Round out AI,” WSJ, July 24, 2021, Sara Castellanos describes the troubling pattern of companies and hospitals using “synthetic” data to fill the gaps in the existing data used to train their AI systems, which in turn make decisions. Lacking adequate real data to test and train their AI systems, they are “estimating” additional data to fill in the gaps. Credit card companies are using this technique to generate fake fraud patterns to bolster the inadequate real training data for their fraud-detection systems. The scary part is hospitals using “synthetic” data to train AI systems that are in turn used to make medical decisions. In my view, this violates the implied first commandment of model building: thou shalt not use made-up data (in any form) to validate hypothetical models that are attempting to describe reality.

Kate Crawford, senior principal researcher at Microsoft and a professor at the University of Southern California Annenberg School for Communication and Journalism, wrote in an article, “Rein In the Robots,” Time Magazine, August 23, 2021, “Too many policymakers fall into the trap of what has been labeled ‘enchanted determinism,’ which is the belief that AI systems are both magical and superhuman—beyond what we can understand or regulate, yet deterministic enough to be relied upon to make predictions about life-changing decisions. This effect drives a kind of techno-optimism that can directly endanger people’s lives.” I would add that such behavior can also endanger the credibility of the technical arms of companies, and possibly even their survivability. Think about estimating service lives of new products without actual proof of concept. Boeing, with its 737 MAX problems, illustrates this point rather clearly.

I want to make sure we do not overpromise the capabilities of AI, either now or in the near-term future. In most fields, AI has yet to accomplish even simple tasks. Experimental design work that dates back 50 years in resin design remains challenging to replicate with AI techniques for formulating finished coatings.

While I support innovation and progress, we need to make sure we are not encouraging what may be unrealistic expectations from management in the coatings industry, to the detriment of their R&D functions now and in the future.

About the Author

F. Louis Floyd spent 35 years in industrial research and development at Rohm & Haas, Glidden, and Duron. For 25 years, he was a reviewer for the Journal of Coatings Technology (subsequently JCTR). He can be reached at flfconsult@cox.net.
