Improving Personnel Safety at Nuclear Plants with AI

Nuclear power is the safest source of clean, reliable energy. A quick glance at the energy "deathprint," a chart that ranks energy sources by deaths per trillion kWh, shows that nuclear energy has the lowest mortality rate, at a global average of 90 deaths per trillion kWh. Much of this success can be attributed to rigorous training and human performance programs, and to an industry-wide appreciation for standardization and communication.

Nuclear plants require a range of inspection and quality control programs to ensure safe and reliable operation. These programs rely on specialized non-destructive examination and penetrant testing tooling, harnessing ultrasonic, eddy current, laser mapping, and visual inspection technologies. Technicians frequently work in environments with the risk of injury from equipment drops, falls, and ruptured vessels.

A warehouse employee wearing hardhat PPE. (Source: Shutterstock.)

Nuclear personnel often wear personal protective equipment (PPE) when completing their daily tasks in and around the world's 449 nuclear plants. This equipment includes hardhats (safety headwear), safety glasses, earplugs, gloves, and safety boots. Unfortunately, beyond signage, crew camaraderie, and corporate safety policy, PPE use is rarely enforced, leading to easily avoidable accidents.

When over 80% of incidents in the nuclear industry can be attributed to human performance shortfalls (IAEA), it’s important to consider intelligent systems that not only enforce PPE readiness, but also recognize human error factors such as fatigue and distress.

In the following paragraphs, we develop a preliminary system that does exactly that.

Setting up the Environment

Make sure you have Python 3.x installed. Ubuntu comes with it pre-installed, but you can always check your Python version in the terminal with:

python3 --version

Now you can go ahead and install the dependencies. We’ll be using the Google Cloud Vision API for this example, but you can definitely build a similar application with IBM Watson Visual Recognition or Azure Cognitive Services.

pip install google-api-python-client
pip install google-auth
pip install google-auth-httplib2
pip install google-cloud-vision

Tip:

You may need to use pip3 instead of pip depending on your environment.
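Once the installs finish, a quick import check confirms the client library is actually available before you go any further (this is just a sanity check of my own, nothing more):

python3 -c "from google.cloud import vision; print('google-cloud-vision OK')"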

Enable the Cloud Vision API

Go ahead and follow the instructions that Google provides on this page: https://cloud.google.com/vision/docs/before-you-begin

And then you can create a service account key. This will download a JSON file to your computer. Save it anywhere, but for your own sanity, make sure the path is short.

Once that is done, go back to the terminal and point the environment variable at the JSON key you just downloaded (replace PATH with the file's location):

export GOOGLE_APPLICATION_CREDENTIALS="PATH"
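If you want to confirm the credentials were picked up, instantiating a client is a quick check; in my experience it should fail fast if the variable is unset or points at a bad key file (this step is a habit of mine, not something Google requires):

python3 -c "from google.cloud import vision; vision.ImageAnnotatorClient(); print('Credentials OK')"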

Get to Coding

Start your favourite text editor. I like to use Atom. It’s free, and you can get it here: https://atom.io/

Begin by importing your dependencies:

import argparse

from google.cloud import vision
from google.cloud.vision import types

And then define your face detection function:

def detect_faces_uri(uri):
    """Detects faces in the file located in Google Cloud Storage or on the web."""
    client = vision.ImageAnnotatorClient()
    image = types.Image()
    image.source.image_uri = uri

    response = client.face_detection(image=image)
    faces = response.face_annotations

    # Names of likelihood from google.cloud.vision.enums
    likelihood_name = ('UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE',
                       'LIKELY', 'VERY_LIKELY')
    print('Faces:')

    for face in faces:
        print('anger: {}'.format(likelihood_name[face.anger_likelihood]))
        print('joy: {}'.format(likelihood_name[face.joy_likelihood]))
        print('surprise: {}'.format(likelihood_name[face.surprise_likelihood]))
        print('sorrow: {}'.format(likelihood_name[face.sorrow_likelihood]))
        print('helmet: {}'.format(likelihood_name[face.headwear_likelihood]))

        vertices = (['({},{})'.format(vertex.x, vertex.y)
                     for vertex in face.bounding_poly.vertices])

        print('face bounds: {}'.format(','.join(vertices)))
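Since the whole point is enforcing PPE readiness, you could reduce the headwear likelihood to a simple pass/fail gate decision. Here's a minimal sketch of that idea (the ppe_ready helper and the POSSIBLE threshold are my own additions, not part of the Vision API or the final listing below):

def ppe_ready(face, threshold=3):
    """Returns True if headwear is at least POSSIBLE.

    Likelihood values are ordered UNKNOWN (0) through VERY_LIKELY (5),
    so a plain integer comparison works as a crude gate check.
    """
    return face.headwear_likelihood >= threshold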

And finally, the entire source looks like this. Copy it and save it as “filename.py”:

import argparse

from google.cloud import vision
from google.cloud.vision import types

def detect_faces_uri(uri):
    """Detects faces in the file located on the web."""
    client = vision.ImageAnnotatorClient()
    image = types.Image()
    image.source.image_uri = uri

    response = client.face_detection(image=image)
    faces = response.face_annotations

    likelihood_name = ('UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE',
                       'LIKELY', 'VERY_LIKELY')
    print('Faces:')

    for face in faces:
        print('anger: {}'.format(likelihood_name[face.anger_likelihood]))
        print('joy: {}'.format(likelihood_name[face.joy_likelihood]))
        print('surprise: {}'.format(likelihood_name[face.surprise_likelihood]))
        print('sorrow: {}'.format(likelihood_name[face.sorrow_likelihood]))
        print('helmet: {}'.format(likelihood_name[face.headwear_likelihood]))

        vertices = (['({},{})'.format(vertex.x, vertex.y)
                    for vertex in face.bounding_poly.vertices])

        print('face bounds: {}'.format(','.join(vertices)))

def run_uri(args):
    if args.command == 'faces-uri':
        detect_faces_uri(args.uri)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    subparsers = parser.add_subparsers(dest='command')

    faces_uri_parser = subparsers.add_parser(
        'faces-uri', help=detect_faces_uri.__doc__)
    faces_uri_parser.add_argument('uri')

    args = parser.parse_args()

    # Only the faces-uri subcommand is registered, so dispatch straight to it.
    if args.command == 'faces-uri':
        run_uri(args)
    else:
        parser.print_help()

Now run it!

To run your PPE and emotion detection algorithm, just run the following, substituting the URL of your image.

python filename.py faces-uri http://yourpath.here.jpg

Obviously, to make this app even better, you could connect it to a real-time computer vision library like OpenCV and detect hardhats and emotions on a live camera feed without ever saving the files!
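If you want to experiment with that idea, here's a minimal sketch of what a camera-fed version might look like. It assumes a local webcam at index 0 and requires opencv-python (pip install opencv-python); encoding each frame as an in-memory JPEG is my own choice for keeping images off disk, using the same client library as above:

import cv2
from google.cloud import vision
from google.cloud.vision import types

likelihood_name = ('UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE',
                   'LIKELY', 'VERY_LIKELY')

client = vision.ImageAnnotatorClient()
capture = cv2.VideoCapture(0)  # default webcam

ret, frame = capture.read()
if ret:
    # Encode the captured frame as a JPEG in memory -- nothing is written to disk.
    _, buffer = cv2.imencode('.jpg', frame)
    image = types.Image(content=buffer.tobytes())

    response = client.face_detection(image=image)
    for face in response.face_annotations:
        print('helmet: {}'.format(likelihood_name[face.headwear_likelihood]))

capture.release()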

Face detection output. Notice the VERY_LIKELY result for the helmet parameter.

Now, how easy was that?! With technologies like these set up at the entrances to hazardous work environments, personal protective equipment readiness can be enforced.

To be clear, I am in no way discounting the importance of corporate safety policy, safety signage, or crew camaraderie, but I strongly believe that inexpensive, intelligent tools can help maintain the safety standard we’re accustomed to at nuclear power plants.


Geoffrey Momin is a graduating Nuclear Engineer from the University of Ontario Institute of Technology. He is actively researching the application of blockchain, artificial intelligence and augmented reality systems to improve personnel safety and performance at energy utilities.

Follow him on Twitter, Github, and LinkedIn.
