CoatingsTech Archives
Leveraging Big Data, Artificial Intelligence, and Machine Learning in the Coatings Industry
September 2019
By Cynthia Challener
Sapper, Cal Poly: We need to be asking three questions when it comes to data needs in our industry. What data do we have? What data do we need? And what questions are we trying to answer? A lot of valuable data already exists, but it is tied up in reports, published literature, or subject matter expertise. The data is there, but not collected in a way that allows helpful artificial intelligence and machine learning projects to be performed. Understanding what type of data is needed for a particular project is the first step in identifying where that data might already exist. If it doesn’t exist, that’s when experimentation can help by filling in the gaps in the available data.
Every imaginable data type is amenable to artificial intelligence and machine learning implementations. In the coatings industry, the most helpful data is likely going to include: small molecule structure-property data; polymer synthesis procedures and outcomes; formulation recipes; formulation design-space data; "time zero" performance data measured after synthesis, formulation, or product application; and, especially, service life data, so that predictions can be made about the useful longevity of the product. All of this is largely quantitative and structured data found in tables and spreadsheets.
Unstructured data in the form of textual internal reports, literature, technical data sheets, and even online data serves as a vast additional source of possibly useful data. Here, the challenge lies in creating natural language processing tools so that algorithms can "read" these technical documents and extract useful information.
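To make the extraction step concrete, here is a toy sketch that pulls structured property values out of free text with pattern matching. The data-sheet sentence and the unit patterns are hypothetical, and real natural language processing pipelines (tokenizers, named-entity models) are far more capable than this simple regular expression.

```python
# A toy sketch of information extraction from unstructured text.
# The data-sheet sentence and unit patterns are hypothetical; real
# NLP pipelines are far more robust than this regular expression.
import re

datasheet_text = (
    "The cured film showed a gloss of 85 GU at 60 degrees, "
    "a dry film thickness of 50 um, and adhesion of 5B."
)

# Pattern: a number followed by a unit of interest (GU, um, or B rating).
pattern = re.compile(r"(\d+(?:\.\d+)?)\s*(GU|um|B)\b")

# Map each recognized unit to its extracted numeric value.
extracted = {unit: float(value) for value, unit in pattern.findall(datasheet_text)}
print(extracted)
```

Once values like these are pulled into tables, they can join the structured data sources already discussed and become usable for machine learning.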