Automated Testing for Manufacturing in 2024: 5 Tips

June 27, 2024
10 min read
Clarke Vandenhoven
Founding Engineer

In a world of ever-increasing automation, reliable automated testing is a key part of any successful manufacturing team and is essential to keeping your operations running efficiently and effectively. Though it has gotten easier and easier to get up and running with automation, below are 5 powerful tips to keep your automation running smoothly.

Tip #1: Serialize Test Outputs

When it comes to automated testing, data is power, and the more data you have collected, the easier it is to diagnose problems and make improvements.

What data should be collected? 

At a bare minimum, test data should be associated with a serial number, so that you can refer back to it later if issues arise (if you have questions about how to serialize your parts, check out this blog post). 

Once you are associating data with serialized parts, the next step is to collect key metadata, such as operator & station information. To start off, simply recording the name of the operator who performed the test and the name of the station where the test occurred will be sufficient, but as your operations become more complex, operators and stations should be encoded with unique identifiers. At Serial, we track stations and operators by default.

Lastly, all the data coming off the test fixture should be written to log files on the system, making sure that the log file name itself meets 3 key criteria (one way to generate such a name is sketched after the list):

  1. Encoded Serial Number: It should be obvious from the file name which serial number was tested.
  2. Encoded Datetime: It should be obvious from the file name when the test was run.
  3. Log File Name Uniqueness: If 1 & 2 are done properly, this should arise naturally.
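As a minimal sketch (assuming Python and a plain serial-number string; the format is just one reasonable choice), a file name that satisfies all three criteria can be built from the serial number and a UTC timestamp:

```python
from datetime import datetime, timezone

def build_log_filename(serial_number: str, extension: str = "log") -> str:
    """Build a log file name that encodes the serial number and test datetime.

    Uniqueness follows naturally from combining the two, as long as the same
    serial number is not tested twice within the same second.
    """
    # ISO-style UTC timestamp using only characters that are safe in file names
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"{serial_number}_{timestamp}.{extension}"

# e.g. build_log_filename("SN-00123") -> "SN-00123_20240627T181530Z.log"
```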

How should the data be stored?

While it is wonderful to preserve all the data related to a test on the machine that executed the test, it will be very onerous to go back and do any kind of analysis after the fact. For that reason, you should decide which key metrics you want to extract from each test, then compute and upload them to a database.

Some common key metrics:

  • Pass/Fail criteria
  • Statistical measures of continuous data (standard deviation, maximum, minimum, median, mean)
  • Associated log files
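As a rough sketch of computing these metrics (assuming Python; the record layout and the `upload_test_record` function are hypothetical placeholders for whatever database client or API you use):

```python
import statistics

def summarize_measurement(name, samples):
    """Reduce a list of continuous measurements to a few key statistics."""
    return {
        "name": name,
        "mean": statistics.mean(samples),
        "median": statistics.median(samples),
        "stdev": statistics.stdev(samples) if len(samples) > 1 else 0.0,
        "min": min(samples),
        "max": max(samples),
    }

def build_test_record(serial_number, passed, measurements, log_file):
    """Assemble the record to upload: pass/fail, summary stats, and the log file."""
    return {
        "serial_number": serial_number,
        "pass": passed,
        "metrics": [summarize_measurement(n, s) for n, s in measurements.items()],
        "log_file": log_file,  # path or URL of the associated raw log
    }

# record = build_test_record("SN-00123", True, {"voltage_v": [3.29, 3.31, 3.30]}, log_path)
# upload_test_record(record)  # hypothetical call into your database/API client
```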

Serial aims to make this easy for you by providing an API & Python library that seamlessly integrate into the same backend infrastructure your operators log data against.

Tip #2: Poka-yoke your fixtures 

Background

Originally developed by Japanese manufacturing engineer Shigeo Shingo in the 1960s while working at Toyota, poka-yoke (ポカヨケ) means “mistake-proofing” or “error prevention”. It is sometimes referred to in English as a forcing function or behavior-shaping constraint.

The key idea of poka-yoke is that, by controlling the mechanisms of how a process can be completed, an operator can avoid (yokeru) mistakes (poka).

Examples

The original example of poka-yoke comes from Shingo himself. In a process where workers were assembling a small switch, workers often forgot to insert a required spring under one of the switch buttons. Shingo resolved this by splitting up the process into two steps:

  1. Workers would gather the two required springs and place them into a placeholder.
  2. Workers would then insert the springs into the switch. If there were still springs left over in the placeholder, they knew the process had been completed incorrectly and could fix the defect immediately.

Another everyday example of poka-yoke is the design of computer connectors, such as USB plugs. If the plug is not inserted correctly, the pins will not align and the connector will not be able to connect. This prevents misuse & failed connections on a structural level.

How to Apply

When it comes to poka-yoke for automated testing, our goal is to limit variability in measurements. The exact steps you need to take will depend on the manufacturing process and the specifics of your fixture, but here are some general guidelines:

  1. Ensure your part can only fit one way into your fixture: This is especially important if the part has opposite faces that look identical. Make sure your automated tests only run once the part is properly inserted (see the sketch after this list).
  2. Analyze your failure points: If you followed Tip #1 and recorded your stations & operators, it should be easy to find stations or operators with consistent end-of-line defects (Serial makes this easy by automatically alerting when it notices a failure pattern).
  3. Go through your processes yourself: Often companies have a divide between the manufacturing engineers who design a process & the operators who actually complete the process on a day-to-day basis. If you are responsible for a process, you should regularly complete the process yourself, explicitly following the instructions you wrote, noting any confusion or ambiguities as points to improve with poka-yoke.
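For the first guideline, here is a hypothetical sketch of gating a test on a fixture sensor; the `fixture_seated` callable stands in for whatever presence or orientation sensor your fixture actually exposes:

```python
import time

def wait_until_seated(fixture_seated, timeout_s=30.0, poll_s=0.2):
    """Poll the fixture's seated/orientation sensor until the part is in place."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if fixture_seated():
            return True
        time.sleep(poll_s)
    return False

def run_test_when_ready(fixture_seated, run_test):
    """Refuse to start the automated test until the part is properly seated."""
    if not wait_until_seated(fixture_seated):
        raise TimeoutError("Part never seated correctly; not starting the test")
    return run_test()
```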

Tip #3: Make your Test Code Resilient

Picture this: in the middle of a test run, your machine's connection to your internal servers suddenly drops out. Not only does your data stop flowing to your analysis platform, but the machine stops running tests entirely, causing the line to go down and costing your company time & money. What went wrong?

Trust No One

One of the key principles in software design, “Trust No One”, gives us a hint. The failure in this case likely stemmed from the test script expecting (or “trusting”) that the data upload would succeed before running its next test, causing it to move to an indeterminate state after that upload failed. These failures can be hardware related (a sensor unplugged mid-test, a fixture getting power-cycled) or software related (the library you rely on for uploading data has intermittent failures); either way, you should not expect any piece of software to work every time.

Instead, at every step in your process, you should apply these two key tenets:

  1. Failure handling - if the code above me failed unexpectedly, does this method crash my program, or does it calmly exit and write what happened to my log files? 
  2. Input validation - if the code above me gave me an unexpected input, do I crash, or do I raise an exception and shut down smoothly?

Here’s an example to illustrate the point:
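As a minimal sketch (assuming Python and a plain-text log file; the function name is just illustrative):

```python
def read_test_log(path):
    """Read the raw test log and return its contents as a list of lines."""
    with open(path, "r") as f:
        return f.read().splitlines()
```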

Above is a very simple function that reads data from a file and returns it as a list split by new line. What could go wrong?

  • The file could not exist or not be readable
  • The data could be corrupted or simply not in the correct format

With the power of “try…except…” and trusting no one, we can easily improve this code:
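A sketch of the improved version (again assuming Python; the logging setup here is just one reasonable choice):

```python
import logging

logger = logging.getLogger(__name__)

def read_test_log(path):
    """Read the raw test log and return its contents as a list of lines.

    Instead of crashing when the file is missing, unreadable, or corrupted,
    log what happened and return an empty list so the caller can decide
    how to proceed.
    """
    try:
        with open(path, "r", encoding="utf-8") as f:
            return f.read().splitlines()
    except OSError as exc:  # file missing, unreadable, disk errors, ...
        logger.error("Could not read %s: %s", path, exc)
    except UnicodeDecodeError as exc:  # corrupted or wrongly formatted data
        logger.error("Could not decode %s: %s", path, exc)
    return []
```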

Now we handle both file existence/readability issues & data corruption issues. If you want a fun challenge, try to find the specific change the second function still needs in order to align with the other tips in this article!

Data Integrity

Now that we have made our code resilient to errors, we need a strategy for maintaining data integrity. Without data integrity, analyzing process failures becomes significantly more challenging, since you now have to handle normalization & missing-data cases. There are two basic paths here:

  1. Pre-validation: Before sending the data to your backend, ensure that all the data you expect to be present is indeed present and in the right format (a sketch follows the list). This approach is simple and easy to implement, at the expense of sometimes having no data at all for a given part.
  2. Post-validation: Continuously send data to your backend, then once all the data has been sent, validate that all expected data is present, then mark it as “complete”. This approach is more complex, but ensures that all data that was ever recorded gets uploaded to the backend. This is the approach that Serial takes, since we provide the tools to understand both whether data is missing and how to use the data that is present.
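As a rough sketch of the pre-validation path (the required-field list here is purely illustrative, not a fixed schema):

```python
REQUIRED_FIELDS = {
    "serial_number": str,
    "pass": bool,
    "metrics": list,
    "log_file": str,
}

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record is safe to upload."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for {field}: expected {expected_type.__name__}")
    return problems

# Pre-validation: only upload when validate_record(record) comes back empty.
```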

Tip #4: Repeatability Testing 

Repeatability testing is essential for ensuring reliability. By executing the same set of tests multiple times and examining the distribution of results, you can both ensure repeatability and understand & measure your systematic error (which could be its own blog post). By repeating tests, you can identify and address intermittent issues and confirm that your processes are stable and reliable. This consistency is crucial for maintaining product quality and meeting your regulatory and quality standards.

Gold Standard & Error

One of the most important parts of repeatability testing is not just ensuring that the tests have a repeatable result (i.e. they give a precise result), but that they give an accurate result. This is the classic precision vs. accuracy problem.


So when you are setting up your tester, make sure that you have done the testing manually first so that a baseline “truth”, or gold standard, can be established.

Offsets

In the real world, test fixtures often have a systematic error in the measurements. If you notice this in your repeatability testing (e.g. your measurements are precise, but do not align with the gold standard measurement), the discrepancy can often be resolved by applying a set offset to your measurements, so that they align with the real world values.
NOTE: If you do apply an offset, be sure to document the offset in your fixture code as well as in the fixture instructions!
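A minimal sketch of applying and documenting such an offset in fixture code (the measurement name and offset value are made up for illustration):

```python
# Calibration offset determined from repeatability testing against the
# gold-standard manual measurement. Also documented in the fixture instructions.
VOLTAGE_OFFSET_V = -0.012  # hypothetical example value

def corrected_voltage(raw_reading_v: float) -> float:
    """Apply the documented calibration offset to a raw fixture measurement."""
    return raw_reading_v + VOLTAGE_OFFSET_V
```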

Tip #5: Build Analytics Tooling

In today's data-driven manufacturing environment, robust analytics are vital for gaining insights. Without appropriate analytics tooling, all the data you have worked hard to gather will either be lost to the sands of time or even be a net cost to your company (since it costs money to store/preserve the data). Let us go over some key features of tooling you should consider when building your analytics suite.

Key Features of Effective Analytics Tooling

  1. Real-Time Data Processing: Any good analytics system will be at least pseudo-real-time, i.e. the tooling will update its visualizations & dashboards as soon as the data reaches the backend, which should ideally happen on the order of seconds.
  2. Customizable Dashboards: The ability to create customizable dashboards is a key part of any analytics suite. Since every company's business needs and definitions differ, you will need dashboards that speak to your specific Key Performance Indicators (KPIs).
  3. Automated Alerts and Notifications: While it is wonderful to set up lots of pretty dashboards and graphs to keep an eye on your system, the meat and potatoes of an analytics system is its alerting. Pre-define failure cases, alert the appropriate people loudly & frequently, and do not allow unresolved alerts to pile up!
  4. Historical Data Analysis: When performing data analysis, control over the time frames being analyzed is key to success. Whether due to environmental factors, untracked changes, or simply time-induced variance, control over time frames will lead you to a much more accurate analysis of your data.
  5. User Access and Permissions: It is very important to have clear and robust user access controls to ensure data security. Beyond security, different team members will need different levels of access depending on their roles. For example, operators will need access to current manufacturing data, whereas engineers will need easy access to alerts, and managers might want to focus on analytics & historical data trends.

Building Analytics Tooling

Platforms like Serial were designed with all of these key features in mind, with tools like Grid Builder to analyze data across multiple different assemblies, a Timeline view to understand historical data, and a custom dashboarding & alerting platform, all tied together in a single easy-to-use platform with simple & robust user access controls.

Conclusion


Automated testing in manufacturing is only as powerful as the work and engineering that goes into it, but once set up, can dramatically improve the speed and efficiency of your operations. We at Serial, using the aforementioned tips, focus on helping manufacturing teams get the most out of their automation.

Serialize Test Outputs: Serial makes it easy to collect and manage test data. By default, Serial enforces the serialization of parts, component instances, stations, & operators, making it clear who was doing what when and where.

Poka-yoke: Serial’s platform is a great first place to find the errors caused by operator error or improperly specified processes, but it is not a replacement for being on the line yourself!

Keep Your Test Code Resilient: Serial focuses on providing APIs & a Python library that not only give you robust access to your backend infrastructure, but also include by default the pre- & post-validation that makes it less likely your systems will unexpectedly fail to log their data.

Repeatability Testing: With Serial, analyzing your retests is a core feature of the platform, since we believe understanding the variance in your test results is a core part of improving your processes. We even have a dedicated view to inform you how a given serial number performs over time and relative to its peers.

Build Your Analytics Tooling: Serial's tooling suite includes the Grid Builder, a visualization tool for understanding data relationships between and among full assemblies, a dashboarding & timeline creation platform, and an alerting system that proactively informs engineers of failures on the manufacturing line.

In conclusion, by using these 5 tips (with or without Serial), you can get the most out of your automation & automated testing, freeing up resources to increase productivity & efficiency. Not only that, but when problems do inevitably arise on the line, engineers will be able to spend the time resolving them, instead of spending hours or days simply gathering the necessary information to start solving the problem.

Clarke Vandenhoven
Founding Engineer

Clarke is a Founding Engineer at Serial. Prior to Serial, he worked at Tesla for over 3 years on the Infotainment System, specifically managing the telemetry system. He's dedicated to keeping code clean, functional, & fast, and to empowering engineers to make data-driven decisions about their products. While not working on Serial, he takes care of his dog, Pei Pei.