
Animal Welfare Ai

An AI/ML–based cloud platform for monitoring animal welfare in dairy farms

My Role

Lead Product Designer, UX Researcher, Product Owner

Deliverable

Build and release the first customer-facing version of the product

Company

Electrifai

Timeline

Jan 2020 - Present

PRODUCT BRIEF

The product

Animal Welfare Ai is a cloud-based tool for monitoring dairy farms and helping farm owners manage them. It improves the wellness and safety of animals and farmers, as well as the quality of products. The product replaces the classical approach, in which individuals spend long hours reviewing video recordings.

Technology

This product uses Computer Vision and Machine Learning technology to detect likely abusive and irregular human interactions with animals at dairy farms. With this technology, we detect behavioral, health, and environmental activities that can impact the production and safety of farms. We translate this visual information into actionable insights that enable farmers to make data-driven decisions to improve farm operations and animal health.

MY PROCESS (Double-Diamond Model)

DISCOVER

DEFINE

DESIGN

DEVELOP


  • User Interviews
  • Contextual Study
  • Stakeholder sessions
  • Competitive Analysis


  • Information Architecture
  • Empathy map
  • User Journey
  • User flow
  • Personas


  • Lo-Fi and Hi-Fi design
  • Feedback gathering
  • Usability testing
  • Iterating on Design


  • Hand-off to the dev team
  • Product release
  • Usability testing
  • Improvements
  • Identify new features

Discover

What We Studied

My main goal during the discovery phase was to understand the current problems and why a new tool was needed to address them. What challenges were users facing that made them seek a new solution?
We also wondered: who are the users? What do they look like? What does their day look like from AM to PM?


How We Did the Research

At the beginning of the project, we used an unstructured interview method to gather information, frame the problem, and determine project priorities. During the sessions, we let participants lead the dialogue, allowing us to see through their eyes.


To get a thorough understanding of our end users, we decided to do a contextual study. We visited people in their everyday environment – a dairy farm – and observed how they perform their daily activities. Watching them complete tasks in familiar surroundings with their own equipment helped us understand their current day-to-day responsibilities and identify gaps and opportunities. During these sessions, I shadowed and interviewed multiple roles on the farm, including the farm owner, dairy manager, and animal welfare expert. As part of the ethnographic photography, and with participants' permission, I also video-recorded and photographed participants while shadowing or interviewing them.

What We Learned

Based on the research studies, we were able to learn about:

1) The current process and its gaps

2) Types of dairy farms and how they have different needs

3) Definition of animal abuse


1- The Current Process and Gaps

In the current process, a dedicated watcher spends about 8 hours a day going over video recordings of a specific location on the farm, looking for potential incidents, writing incident reports of their observations, and sharing them with the animal welfare expert for further review.

A medium-sized dairy farm has about 20 cameras recording video 24/7, all year round. This means that a farm with 20 cameras needs about 12 full-time employees to keep up with all the recordings, which is pricey yet inefficient due to human error, distractions, too few cameras to cover all areas of the farm, etc. To cut expenses, farm owners hire few or no watchers, which makes it impossible to keep up with monitoring all farm areas.

Below is the workflow of the current process:

2- Type of Dairy Farm Matters!

One of our findings during the research phase was that dairy farms have totally different layouts and monitoring systems based on their geographical location, size, and business goals. For example, a farm located in a warm, hot climate is called a drylot dairy, where cows are mainly kept outside.

In the current farm monitoring process, everything is reported and documented on paper.

3- Definition of Abuse

Meeting with stakeholders, animal welfare experts, and farm owners, we learned that animal abuse goes beyond the physical category and covers emotional and verbal abuse, as well as failing to provide the right level of health and safety for the animals. Here are more details on the different types of abuse:

Physical 

  • Kicking, hitting
  • Unnecessary rapid movement 
  • Forcing the animal to do something 

Verbal & Emotional 

  • Shouting and screaming 
  • Aggressive body language
  • Separation anxiety (mom and the calf)

Health & Safety

  • Not providing enough food and water
  • Mishandling sick animals
  • Lack of proper maintenance of farm machines
  • The danger of automated machines

Technology

In this product, we used Artificial Intelligence (Ai) and Machine Learning (ML) algorithms to monitor camera recordings and detect abusive human-animal interactions, using object detection technology as well as edge computing. As the product designer, it was critical for me to understand how the data science model works so I could make the right connection between the input and the output.

I wondered:

  • How does data – video– enter the system?

  • How often does the system refresh with the new data?

  • How long does it take for the model to do the video/image processing?

  • How should we QC the model results?

  • What kind of feedback does the Data Science team require to improve their algorithm?


This information helped me map the overall workflow of the model as well as identify the design elements and product requirements.

The Ai and ML algorithm had four phases: 

  • Phase 1: System detects Human-cow Appearance

  • Phase 2: System detects Human-cow Interaction

  • Phase 3: System detects Likely Human-cow Abuse

  • Phase 4: Feedback loop - Human reviews the workflow and provides feedback to train the algorithm and make it more sophisticated over time
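
To make the phases concrete, here is a minimal sketch of how they might chain together on a single frame. The function names, the box-proximity heuristic, and the `abuse_model` callable are illustrative assumptions on my part, not the production model.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "person" or "cow"
    confidence: float  # detector confidence, 0..1
    box: tuple         # (x1, y1, x2, y2) bounding box in pixels

def boxes_near(a, b, margin=20):
    """Phase 2 heuristic (assumed): a person and a cow 'interact'
    when their bounding boxes come within `margin` pixels."""
    ax1, ay1, ax2, ay2 = a.box
    bx1, by1, bx2, by2 = b.box
    return not (ax2 + margin < bx1 or bx2 + margin < ax1 or
                ay2 + margin < by1 or by2 + margin < ay1)

def analyze_frame(detections, abuse_model, threshold=0.7):
    """Run phases 1-3 on one frame's detections; return the abuse
    probability if the clip should be queued for human review
    (phase 4), otherwise None."""
    people = [d for d in detections if d.label == "person"]
    cows = [d for d in detections if d.label == "cow"]
    if not people or not cows:                       # Phase 1: co-appearance
        return None
    pairs = [(p, c) for p in people for c in cows
             if boxes_near(p, c)]                    # Phase 2: interaction
    if not pairs:
        return None
    prob = max(abuse_model(p, c) for p, c in pairs)  # Phase 3: likely abuse
    return prob if prob >= threshold else None       # Phase 4: human review
```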

The Importance of the Feedback Loop

The feedback loop is a key piece in building a robust ML model. It is crucial to provide a feedback loop where users' input trains the ML algorithm in an unbiased and ethical way. The current model in production determines whether or not an incident is reported based on a given set of features. If the previous model determined that certain features are an indication of abuse, then those features are now highly correlated with the future outcome of abuse detection.

If the feedback is not accurate enough, incidents could be wrongly reported as abuse when they are just normal human-animal interactions, or real incidents could wrongly go unreported because of wrong or biased input in the past.
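
As a minimal sketch of this loop, the snippet below turns human review decisions into labeled examples for the next training run. The `reviews` structure and its field names are assumptions about the review store, not the actual schema.

```python
import csv

def export_feedback_labels(reviews, path="feedback_labels.csv"):
    """Convert watcher verdicts into labeled examples for retraining.

    `reviews` is assumed to be a list of dicts, each holding the
    model's original score and the human's decision. Cases where the
    human disagrees with the model are the most valuable signal for
    correcting bias in the next training run.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["clip_id", "model_probability", "human_label"])
        for r in reviews:
            # human_label: 1 = confirmed abuse, 0 = normal interaction
            writer.writerow([r["clip_id"], r["model_prob"], int(r["is_abuse"])])
```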


DEFINE

Personas

User personas play a key role in identifying different user types: their specific goals, day-to-day tasks and responsibilities, current struggles, and tools and software. I collected persona details for this product by talking to users and stakeholders, and the personas grew over time.

User Flow

I created a user flow diagram to anticipate and better understand users' cognitive patterns and the paths they take when using the product.

User Journey with Empathy map

To better understand and visualize users' needs and current pain points, I created a user journey along with empathy maps for all personas. Here is the one for the watcher persona:

Information Architecture

To better envision the site map, determine the essential pages to sketch, and find the gaps, I generated an IA.


DESIGN 

Lo-fi designs (MVP)

From the research phase, it was clear that this is a niche product whose users are experts and key stakeholders in the process. Therefore, I decided to use the participatory design method to create the design flow and the user interface. Using this methodology, I ran multiple workshops with 3 users, a PM, and 1 data scientist.

From the input captured during the participatory workshops, in addition to the IA, user flows, and empathy maps, I created the first version of the low-fidelity designs and gathered a couple of rounds of feedback from stakeholders as well as the scrum team members.


Develop

Hi-Fi designs and hand-off to the dev team

After finalizing the wireframes, I used the company's design system to create the high-fidelity user interface. After this round, we had a design freeze and handed off the screens to the back-end and front-end teams. Of course, during the development process, there were constraints and some ambiguity around user interactions for edge cases. Collaborating closely with the tech, software, and QA teams in daily scrums, I reconsidered some of the designs, added clarification for users' micro-interactions, or added new features to fill the gaps.

Usability Testing

After 6 sprints, the first version of the product was released, which I used to run usability sessions with 4 users in total (2 users per persona). I created and prioritized a list of questions that I or the team needed to learn more about. The user testing had two main goals: 1- To get user feedback on the platform and identify areas for UX improvement (45-min moderated session); 2- To do a card-sorting test with users about future features (15-min unmoderated session).

New Features (Dashboard & Admin page)

Based on the results from the user testing sessions, we learned that the supervisor persona needs a dashboard where they can get high-level and detailed statistics about how the number of incidents changes over time, by location, by incident category, etc. We also learned that the supervisor persona needs an admin page to set up locations and cameras and to add/remove watchers. Based on users' feedback and requirements from the PM, I designed screens for the dashboard and the admin page.


OUTCOME (DESIGN HIGHLIGHTS)

Dashboard

This page provides an overview of how the number of incidents changes over time and in different locations.


Pie chart: Displays the distribution of event types across different time frames (today, past 7 days, past 30 days, or all time).

Line chart: Shows the number of Urgent, Major, and Minor events per day (from the past 7 days to the past 180 days, or all time).

Stacked bar chart: Provides details on the distribution of event severity levels in each location across different time frames.
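
A minimal sketch of the aggregations behind these charts, assuming a hypothetical incidents table with `timestamp`, `event_type`, `severity`, and `location` columns (the real platform reads from its own database, not a CSV):

```python
import pandas as pd

# Hypothetical export of the incidents table.
incidents = pd.read_csv("incidents.csv", parse_dates=["timestamp"])

# Restrict to the selected time frame, e.g. the past 7 days.
cutoff = incidents["timestamp"].max() - pd.Timedelta(days=7)
recent = incidents[incidents["timestamp"] >= cutoff]

# Pie chart: distribution of event types.
event_type_counts = recent["event_type"].value_counts()

# Line chart: Urgent/Major/Minor events per day.
daily_by_severity = (recent
                     .groupby([pd.Grouper(key="timestamp", freq="D"), "severity"])
                     .size()
                     .unstack(fill_value=0))

# Stacked bar chart: severity distribution per location.
severity_by_location = (recent
                        .groupby(["location", "severity"])
                        .size()
                        .unstack(fill_value=0))
```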


Videos

This page displays a queue of all the incidents identified by our Machine Learning algorithm. For each identified incident, four parameters are shown: 1- Probability percentage, 2- Time/date of the event, 3- Location of the event, and 4- Camera.

Users can select a video from the queue based on their preference. Only after watching the full video (~10-15 seconds) does a form appear, which the user fills out with details about what they saw in the video. This information includes the event type, severity level, and the user's note.
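
A minimal sketch of the data behind this flow; the field names are illustrative assumptions, while the severity levels and event categories come from the product itself:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class QueuedIncident:
    """An ML-flagged clip waiting in the watcher's review queue."""
    clip_id: str
    probability: float     # model's abuse likelihood, 0..1
    occurred_at: datetime  # time/date of the event
    location: str          # farm location the camera covers
    camera: str            # camera identifier

@dataclass
class IncidentReview:
    """The form a watcher fills out after viewing the full clip."""
    clip_id: str
    event_type: str        # "Physical", "Verbal & Emotional", or "Health & Safety"
    severity: str          # "Urgent", "Major", or "Minor"
    note: str = ""
    supervisor_comment: Optional[str] = None  # added during verification
```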


Incident Reports

All reviewed incidents are listed on this page after the watcher watches the videos and fills out the form on the Videos page. This list provides details about each incident, including time, date, location, event type, etc.

To improve the accuracy of incident reports, every report needs to be verified by a supervisor after being created by a watcher.
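
A minimal sketch of the verification rule this implies; the status names and the role check are my own illustrative naming, not the product's actual implementation:

```python
from enum import Enum, auto

class ReportStatus(Enum):
    PENDING_REVIEW = auto()  # created by a watcher, awaiting a supervisor
    VERIFIED = auto()        # confirmed; may now be downloaded or printed

def verify_report(report, user):
    """Only a supervisor may verify, and only a report still pending review."""
    if user.role != "supervisor":
        raise PermissionError("Only supervisors can verify reports.")
    if report.status is not ReportStatus.PENDING_REVIEW:
        raise ValueError("Report is not awaiting verification.")
    report.status = ReportStatus.VERIFIED
```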

View/Edit a Report

Users can select a report from the Incident Reports page and edit it as needed.

A supervisor, for example, can see the details of a report provided by a watcher and leave their own comment on the report.


Print a Report

After a report is verified by a supervisor, it can be downloaded or printed for documentation purposes.



REFLECTION

Achievements

Being a part of this project was an amazing experience, since it was the first product that I owned – as product owner – while leading the end-to-end design process from research to ideation, interaction design, UI design, testing, etc. I contributed as Lead Product Designer and Product Owner in one major release and multiple minor releases over more than 30 sprints, where I got to work closely with our data science, infrastructure, and engineering teams.

Learnings

During the product development process, as we gathered more feedback from beta users, I learned that farm users are a lot more diverse – in the way they manage their farms – than we initially anticipated from the research. In such cases, it is beneficial to run exploratory research with a larger and more diverse group of participants to cover more use cases and user scenarios.

Future of The Product

After releasing the initial version of the product and onboarding beta users, we started gathering useful information on how users interact with the app on a daily basis. One feature we learned would be beneficial for users is the ability to customize their preferences for different farm locations. This means users can define which specific interactions they are sensitive to in each location of the farm. This feature helps users focus on the incidents they care about most.


One example: because a calf is more delicate and sensitive than a cow, any human-animal interaction near a calf box could be harmful and should be reported, while the level of sensitivity is much lower for an adult cow.

This product is growing; please visit again soon for the most recent updates.
