
MV Object Detection

Overview

All second-generation and newer MV cameras are capable of processing powerful analytics on the camera itself and transmitting this metadata to the Meraki cloud. This revolutionary architecture dramatically reduces the cost and complexity of gathering detailed analytics in any environment.

With this architecture, all MV cameras can transmit motion metadata to the Meraki cloud, enabling Motion Search, a powerful video retrieval tool, and Motion Heatmaps, which is part of MV analytics.

One of the new and exciting capabilities of this platform is machine learning-based analytics. With this comes object detection, which includes people and vehicle detection, allowing you to understand how objects are moving through and using your physical spaces.

There are a number of features that enable you to interact with MV object detection, and we will walk through them one by one in this article.

Object Detection

This feature allows you to narrow down your search to detect people and vehicles (bikes, cars and trucks) in the camera's field of view.


The first place you can see MV object detection is when viewing historical video and clicking “Show Objects” on a single camera’s video page. Objects detected as people are enclosed in yellow boxes, whereas vehicles are enclosed in purple boxes, as shown.


Vehicle detection is only available on the following outdoor camera models: MV72, MV52, and MV63.

MV Analytics Tab

Both object detection and motion metadata are aggregated for you to analyze in the Meraki Dashboard, under the Analytics tab for each camera. Here, Meraki uses object detection analytics to create histograms of detections by object type (person or vehicle). For example, you can choose person or vehicle and analyze how many people or vehicles entered or were present at a specific time. The dashboard can show you this data at a minute, hourly, or daily scale, which allows you to identify time-based trends and anomalies in the usage of your space. This tab’s information also serves as a tool to quickly find relevant video clips via histograms and time links. Motion heatmaps are also provided at the bottom of the Analytics page to correlate the people detection data with motion data.


Object Detection Features

  • Time Resolution, Date, Start Hour and End Hour

    • Configure the scale for the slices of your histogram and the time range for your analytics.

  • Most Utilized Hour

    • This value represents the hour that had the highest average occupancy.

  • Estimated Peak Occupancy

    • This value takes the maximum of the estimated occupancy of the scene across the selected time range. The estimated occupancy of a scene is calculated every minute and is an average of the number of objects detected for every second of the minute (see the worked sketch after this list).

      • Example 1: 10 people are in a camera's field of view (FoV) for 5 seconds, followed by 0 people for the remaining 55 seconds of that minute

        • The estimated occupancy for that minute would be rounded to 1 person.

      • Example 2: 10 people are in a camera's FoV for 55 seconds, followed by 0 people for the remaining 5 seconds of that minute

        • The estimated occupancy for that minute would be rounded to 9 people.

  • Entrances (bar chart/histogram)

    • Presents the total number of entrances per hour/day for the hour/date range specified. Depending on the selected time resolution, this can cover up to 1 hour, 24 hours, or 7 days.

      • Example: A single person may be detected as multiple entrances during the period that they are within the frame.

        • If a person walks into a camera’s FoV, walks across the frame, and then walks out unobstructed by other people or objects, the person will likely be counted as one entrance.

        • If a person walks into a camera's FoV with a column in the middle, stands behind a column for a short while, and then reappears from behind the column before exiting the frame, this person will likely be counted as two entrances to this total.

        • If a person bends down and is momentarily not detected as a person due to their crouched shape, and then stands up again, this person will likely be counted as two entrances as well.

  • Total Entrances

    • This value represents the number of entrances of objects detected within the scene.
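
For clarity, here is a minimal Python sketch of the per-minute averaging described above. The rounding shown reproduces the two examples; treat it as an illustration rather than the camera's exact implementation.

```python
# Minimal sketch of the per-minute occupancy averaging described above.
# `counts` holds one people count per second for a single minute (60 samples).

def estimated_occupancy(counts):
    """Average the per-second object counts over one minute, then round."""
    assert len(counts) == 60, "expects one count per second of the minute"
    return round(sum(counts) / len(counts))

# Example 1: 10 people visible for 5 seconds, then 0 for 55 seconds.
print(estimated_occupancy([10] * 5 + [0] * 55))   # 1  (10 * 5 / 60 = 0.83)

# Example 2: 10 people visible for 55 seconds, then 0 for 5 seconds.
print(estimated_occupancy([10] * 55 + [0] * 5))   # 9  (10 * 55 / 60 = 9.17)
```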

MV Sense API

The final way to interact with MV object detection analytics is to use the API endpoints provided with MV Sense to build intelligent business solutions. Read the MV Sense article for more information.
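
For example, the minimal Python sketch below polls a camera's live MV Sense analytics using the requests library. The API key and camera serial are placeholders, and the endpoint path and response shape are assumptions to confirm against the current Dashboard API documentation.

```python
# Minimal sketch: poll a camera's live MV Sense analytics from the Meraki
# Dashboard API. API_KEY and CAMERA_SERIAL are placeholders; confirm the
# endpoint path and response shape in the current Dashboard API docs.
import requests

API_KEY = "your-dashboard-api-key"  # placeholder
CAMERA_SERIAL = "Q2XX-XXXX-XXXX"    # placeholder

url = f"https://api.meraki.com/api/v1/devices/{CAMERA_SERIAL}/camera/analytics/live"
resp = requests.get(url, headers={"X-Cisco-Meraki-API-Key": API_KEY}, timeout=10)
resp.raise_for_status()
print(resp.json())  # e.g. per-zone counts of currently detected objects
```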

Example Use Cases

Here are some examples of what you can do with object detection analytics.

Example 1: Find anomalies

Say that your histogram shows people detected at a time of day when you would not expect anyone to be present (for example, 4am on a weekday). By clicking on that hour in the histogram, you can immediately see what was happening during that time (for example, someone grabbing coffee, likely preparing for a very early 4am meeting).


Example 2: Find when space was most occupied

By clicking on the number under "Peak Occupancy", you can go straight to the video clip where the highest occupancy was observed and get more insight into what was going on.


Technical FAQs

How does it work?

Software on the camera analyzes images multiple times per second and identifies where objects are located. The camera then tracks the location of these objects over time to understand when they entered, where they went within view, and when they left. The camera rolls up its findings and reports them to the dashboard, where you can view the data in summary form. These sub-second detections can also be streamed for detailed analysis and storage via MQTT; this functionality requires an MV Sense license to be applied to the camera. Our object detection is driven by computer vision and machine learning.
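
As a rough illustration, the following minimal Python subscriber (paho-mqtt, 1.x callback style) listens to a camera's detection stream. The broker address, port, and topic string are deployment-specific placeholders; configure the broker in the camera's MQTT settings and confirm the exact topic names in the MV Sense MQTT documentation.

```python
# Minimal sketch: subscribe to a camera's sub-second detection stream over
# MQTT (paho-mqtt 1.x callback style). BROKER, PORT, and the topic below are
# placeholders; confirm topic names in the MV Sense MQTT documentation.
import json

import paho.mqtt.client as mqtt

BROKER = "192.0.2.10"             # placeholder: your MQTT broker address
PORT = 1883
CAMERA_SERIAL = "Q2XX-XXXX-XXXX"  # placeholder: your camera serial

TOPIC = f"/merakimv/{CAMERA_SERIAL}/raw_detections"  # assumed topic pattern

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    detections = json.loads(msg.payload)
    print(detections)  # timestamped detections for this camera

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, PORT)
client.loop_forever()
```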

What do you mean by "machine learning"?

Meraki smart cameras use deep learning, a type of machine learning at the forefront of artificial intelligence research, to drive our computer vision object detection. The smart camera development teams continually show a computer thousands of examples of what objects look like and it "learns" how to identify them more and more accurately over time. The model improves as we provide it with additional training data.

The smart camera analytics models are trained only on data that is legally owned or licensed by Cisco Meraki, and only on data from customers who have explicitly opted in to help improve our models.

Does this do person identification or facial recognition?

No! Meraki smart cameras do not identify or track specific individuals. Persons are detected as objects (if the analytics model supports it) and tracked in the scene until the object leaves the camera's field of view.

Does this take any extra bandwidth? Do I need another server?

No! All of the image processing is done right on the camera's processor. Like all Meraki products, the hardware sends small amounts of metadata back to the dashboard for further processing and storage.

Why am I not getting the numbers I would expect?

Once a camera is physically installed in a proper location and adjusted for the correct FoV, the MV will automatically start to gather analytics data. For more information on installing an MV camera to optimize object detection performance, see the deployment guidelines below. In any deployment, you should use the data comparatively for observing trends and anomalies, as opposed to using it as an absolute measurement. Refer to the Object Detection Features > Entrances section above for some more explanation.

How can I troubleshoot MV object detection?

As of June 2020, you can view the detailed output of the running object detection model. This capability is instrumental for advanced troubleshooting and debugging of applications consuming this data via either MQTT or Dashboard API calls.

An MV Sense license is required to be applied to an MV camera to access the advanced analytics debug mode.

To access the advanced analytics debug mode, navigate to the "Show People" tool when viewing historical footage. You should see an additional toggle for enabling/disabling the debug mode overlay. The debug mode renders each detected object's Object ID #, confidence %, and bounding coordinates (X0,Y0),(X1,Y1).
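
As a rough sketch, the snippet below interprets a single detection record of the kind the debug overlay renders. The field names, and the assumption that bounding coordinates are normalized to the frame, are illustrative and should be verified against the MV Sense documentation.

```python
# Rough sketch: interpret one detected object as rendered by debug mode.
# Field names and the normalized-coordinate assumption are illustrative.
detection = {"oid": 42, "confidence": 87,
             "x0": 0.10, "y0": 0.25, "x1": 0.32, "y1": 0.80}

FRAME_W, FRAME_H = 1920, 1080  # example frame resolution
MIN_CONFIDENCE = 50            # illustrative threshold for debugging

if detection["confidence"] >= MIN_CONFIDENCE:
    # Scale the normalized (x0, y0)-(x1, y1) box to pixel coordinates.
    left, top = int(detection["x0"] * FRAME_W), int(detection["y0"] * FRAME_H)
    right, bottom = int(detection["x1"] * FRAME_W), int(detection["y1"] * FRAME_H)
    print(f"object {detection['oid']}: ({left},{top})-({right},{bottom}), "
          f"{detection['confidence']}% confidence")
```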

If there are no detections on the camera and no MQTT output for a camera, please ensure the camera is receiving the correct PoE power as stated in its datasheet. To see if you're hitting this issue, search the Event Log for Event Type "PoE power error. Incorrect PoE standard detected." which will be logged when the camera boots.


Deployment Guidelines for MV Object Detection

This section details guidelines for installing cameras with onboard object detection in retail and security use cases.

Cameras should be installed wherever there is a need to gather enhanced consumer behavior insights and information about activity that has occurred or is occurring at a given location.

Common Deployment Locations

People Detection

MV people detection is commonly used in areas with high foot traffic and will work well in areas with little obstruction. Some examples include:

  • Entrances, exits leading inside and outside of a building

  • Staircases and other walkway intersections

  • Areas where people stand (e.g. clothing racks, promotional stands)

  • Queues (e.g. checkout lines)

Vehicle Detection

MV vehicle detection can be most useful in the following scenarios:

  • Vehicle entrances, exits leading inside and outside of a building/warehouse

  • Parking lots/garages

  • Drive-in restaurants

Common Deployment Challenges

When choosing how to deploy your MV for object detection, consider the following challenges which are common in detecting both people and vehicles.

  • Avoid installing a camera where glare off other objects (such as glass) may be present at various times throughout the day; this helps prevent backlighting and lens flare.

  • Avoid aiming the camera at highly reflective surfaces (like mirrors).

People Detection

  • Avoid pointing the camera out towards a street or walkway outside of the location of interest. This deployment might incorrectly capture people walking outside the store or into another store. 

  • Avoid angling the camera such that many people occlude/obscure others. For example, a camera will have more trouble if it is looking at a queue straight on as opposed to from the side.

  • When placing the camera by entrances or exits, try to place the camera indoors where consistent, even lighting can be better controlled.

  • If a camera is capturing an area where multiple walkways intersect (such as the end of an aisle), the camera should capture as much of the whole intersection as possible to maximize the duration of time that a single person is seen on the screen.

  • Avoid deploying the camera in a scenario with objects that appear human-like (e.g. displays showing video of people walking, posters depicting people, mannequins).

Vehicle Detection

  • For outdoor cameras, bright light such as sunlight, spotlights, fluorescent lighting, and street lights may cause the image to be washed out or very dark.

  • MV72s are designed to operate between -40°C and 50°C (-40°F and 122°F), but image quality may be affected by various climatic conditions such as fog or smog.

    • Moisture in the surrounding region can also reflect IR light into the lens.

Installation Guidelines

  1. Before installing any cameras, identify the best deployment locations given the guidelines above to ensure the camera is viewing the best scene for object detection.

  2. Now that you have optimized your deployment locations for object detection, read this chapter of Designing Meraki MV Security Camera Solutions for how to conduct a proper site survey. The site survey will help determine the best locations to install your cameras.

  3. Before permanent installation, it is highly recommended to temporarily affix the camera or camera-mount assembly, turn on the camera, and view the video stream for the best results. Adjust according to the guidelines below.

    • Ensure your line of sight distance to expected foot traffic is at least 5 feet.

      • For the MV12 fixed lens, ensure that this line of sight also does not exceed 40 feet. The maximum distance for varifocal lenses will be different and should first be tested based on the optical zoom applied.

    • Ensure that the camera is mounted at a height of at least 6 feet above the ground. Performance typically improves when mounting at or above 10 feet.

    • Ensure that the camera is angled such that the desired scene is free of or mitigates obstructions.

    • Ensure that the camera's optical zoom (if applicable), sensor crop, and focus are set optimally. Follow these articles on Adjusting the Field of View of MV22 and MV72 cameras and Focusing MV22/72 and MV21/71 Cameras.

    • Double-check that you have proper lighting in the area, and that high dynamic range (if needed) and night mode are set according to your needs.

  4. Have someone walk around and observe that object detection is working as expected. If not, repeat step 3 until the desired results are obtained. Make use of the analytics debug mode and pay close attention to the output to determine whether any immediate problems stand out.

  5. Once satisfied with the performance of object detection, permanently install the cameras.

If the deployment is consistent with these guidelines, the MV should detect objects accurately and provide detection results that can be used to make informed decisions. A well-deployed MV produces analytics that are much more reflective of the ground truth and can be used quantitatively over an extended period of time, with a high enough volume of people and vehicles, to observe trends and anomalies.

Model Selection Guidelines

Nothing is a good substitute for proper physical deployment of a camera. Ensure you have read through and satisfied the preceding deployment guidelines before switching model types!

  1. First, determine the analytics use case or application that the camera you're working with is expected to serve. The default object detection model running on each camera hardware model has been developed and optimized to suit the majority of general use cases for that object detection type (person, vehicle, and so on).

  2. If your general use case is not being met with the current default object detection model, try switching to the experimental object detection model.

  3. Navigate to a camera's Settings > MV Sense tab, select from the drop-down which model version you wish to run, and save your configuration.

  4. Have someone walk around and observe that object detection is working as expected. Make use of the analytics debug mode and pay close attention to the output to determine whether any immediate problems stand out.

    1. Pay special attention to object IDs over a sufficient sample period (generally 1-2 hours, when the scene is busiest). This should provide a good measure of model performance in the most challenging conditions for the camera's environment.

    2. As mentioned previously, objects that are occluded or leave and re-enter the scene will likely be counted as separate objects and increment the unique object ID. Reposition your camera if possible to mitigate this impact.

