Salimeh Sekeh wants to help humanity – and she sees artificial intelligence as a key way to do that. At the University of Maine, Sekeh leads research that designs machine learning models and, in essence, teaches artificial intelligence to improve itself – work with promising applications in Maine and beyond.

Before Sekeh joined the University of Maine in 2019, she was a postdoctoral researcher in electrical engineering and computer science at the University of Michigan. On North Campus, where she spent most of her days, she often attended events about how artificial intelligence was being used to improve everything from self-driving vehicles to medical devices.

Sekeh was already passionate about designing algorithms – the architecture of code that determines what it can and cannot do – and began to imagine how she could use her skills to continue improving these technologies in a positive way.

“AI is a technology that can help us shape the world we want to live in,” says Sekeh. “It doesn’t apply to just one problem. Everything can be related in some way because in everything you have data.”

When she applied for tenure-track positions, she saw an opportunity to bring her interest in machine learning to UMaine, which she learned was planning to invest heavily in machine learning, data science and artificial intelligence research.

Now Sekeh is studying deep neural networks – essentially, a subset of artificial intelligence that learns by mimicking the complex connections of our own brains. Deep neural networks are already present in many aspects of everyday life, from virtual assistants like Siri and Alexa and self-driving cars to photo tagging suggestions on Facebook that seem to be getting more accurate every day.

Sekeh explains that, whatever their function, deep neural networks need training to perform their assigned tasks and make decisions. Facial recognition software, for example, has to learn to distinguish between faces before it can identify exactly which face is depicted.

Sekeh used the example of a baby learning to do basic tasks, like eating. A researcher provides a deep neural network with “training data” much as a parent demonstrates the basic mechanics of feeding to a baby, until the baby is finally able to eat on its own.

However, deep neural networks are complex and require a large amount of computer memory to operate. As technology and algorithms continue to evolve and improve, it is increasingly important to figure out how to compress them without losing functionality and performance.
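The kind of compression described above can be made concrete with a minimal sketch. The `prune` function below is a hypothetical illustration of magnitude pruning, one common compression technique – not Sekeh’s own method – showing how a network can shrink by discarding its least important weights:

```python
# Hypothetical magnitude-pruning sketch (illustrative only, not any
# specific compression method from Sekeh's research).

def prune(weights, keep_ratio=0.5):
    """Zero out all but the largest-magnitude fraction of weights."""
    k = max(1, int(len(weights) * keep_ratio))
    # Find the magnitude of the k-th largest weight; keep anything at or above it.
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

print(prune([0.9, -0.05, 0.4, 0.01], keep_ratio=0.5))  # → [0.9, 0.0, 0.4, 0.0]
```

Zeroed weights need not be stored or multiplied, which is what makes the compressed network cheaper to run in memory-limited settings.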

Sekeh wants to figure out how to prototype the architecture of these deep neural networks so that they can better determine which learned skills they need for a given task, through a process called “continual learning.” With Sekeh’s algorithm, deep neural networks will be able to set aside or “freeze” features that are unnecessary for the current task and reuse them for the next one, instead of filtering them out entirely, as existing algorithms often do.

“Once I learn to eat, I don’t forget,” says Sekeh. “When you teach a baby to walk or any other task, it doesn’t forget how to eat. The part of the brain that has already learned to eat is ‘frozen’ for that task. The result of this process is lifelong learning, which is what humans do.”
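The “freezing” idea can be sketched in a few lines. The toy below is hypothetical and not Sekeh’s algorithm: a short list of weights stands in for a network, and a boolean mask stands in for the frozen portion, so training on a new task cannot overwrite what was learned for the old one:

```python
# Toy sketch of parameter "freezing" for continual learning
# (hypothetical illustration; not Sekeh's actual algorithm).

def sgd_step(weights, grads, frozen, lr=0.1):
    """Gradient update that skips any weight marked as frozen."""
    return [w if f else w - lr * g for w, g, f in zip(weights, grads, frozen)]

# Task 1: all weights are trainable.
weights = [0.5, -0.3, 0.8, 0.1]
frozen = [False, False, False, False]
weights = sgd_step(weights, [0.2, -0.1, 0.4, 0.0], frozen)

# After task 1, freeze the weights deemed important for it,
# so learning task 2 leaves them untouched.
frozen = [True, False, True, False]
weights = sgd_step(weights, [0.3, 0.2, -0.5, 0.1], frozen)
print(weights)
```

Frozen weights survive later training unchanged – the code analogue of not forgetting how to eat while learning to walk.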

In 2021, Sekeh received $80,000 from the National Science Foundation for her research. She hopes her compression techniques will make deep neural networks cheaper to run in order to expand their use in smaller devices like cell phones and drones, as well as in resource-limited environments – for example, an aerial drone collecting data over a remote forest.

Sekeh’s deep neural network research is not limited to compression, however. In 2022, she received another $679,004 grant from the NSF – this time, an Early CAREER Award – to research the robustness of machine learning: the ability of models to handle noise or disturbance without losing functionality, performing even in the face of adversarial conditions.

Think of a self-driving vehicle’s camera detecting a stop sign, but the image is blurry because the car hit a bump or it’s raining outside. A network that lacks robustness may interpret this noisy image as a slow sign, which would put the passengers at risk.

“We have data that makes a network vulnerable and tricks the network,” says Sekeh. “Our mission is that when we learn tasks and train deep learning algorithms, we teach the network to be robust against these adversarial examples.”
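A toy example makes the robustness problem concrete. The classifier below is entirely hypothetical – a nearest-prototype “sign classifier” on two-dimensional inputs rather than real camera images – but it shows how a small perturbation can flip a non-robust model’s decision:

```python
# Hypothetical toy showing non-robustness: a tiny nearest-prototype
# classifier whose decision flips under a small input perturbation.
# Real adversarial attacks perturb images, not 2-vectors.

def classify(x, prototypes):
    """Return the label of the prototype nearest to x (squared Euclidean distance)."""
    return min(prototypes,
               key=lambda label: sum((a - b) ** 2 for a, b in zip(x, prototypes[label])))

prototypes = {"stop": (1.0, 0.0), "slow": (0.0, 1.0)}

clean = (0.6, 0.5)   # clean reading of the sign
noisy = (0.6, 0.72)  # slightly perturbed, e.g. rain or motion blur
print(classify(clean, prototypes))  # → "stop"
print(classify(noisy, prototypes))  # → "slow"
```

A robust classifier would have to give the same answer for both inputs, since the perturbation is small; training networks to behave that way is the goal of the robustness research described above.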

Sekeh says the machine learning industry tends to separate the ideas of robustness and compression, but through her research, she aims to unite the two to create better, more efficient deep neural networks overall.

“We’re saying, ‘Wait a second, if you’re doing compression and part of your network is discarded, isn’t it vulnerable?’” Sekeh says. “Let’s do it simultaneously: compress it and address robustness. We work on them independently and where they overlap to improve the performance of deep learning models in an efficient and robust way.”

Sekeh envisions many ways her research can apply to problem solving in Maine and beyond. Robust, efficient deep neural networks will not only make self-driving cars safer, even in the snowiest parts of Maine, but will also make drones and other autonomous research vehicles more accurate and usable for farmers, foresters, marine scientists and more.

Sekeh sees education as an essential part of her work – and not just the education of AI. She organizes two summer training camps for undergraduate students at the Roux Institute to learn more about deep neural networks and train the next generation of scientists like her.

Contact: Sam Schipani,