How can I prevent MS Teams from selecting MJPG as the camera data format?

Tom JD1 Xue 21 Reputation points
2025-11-04T14:57:49.6266667+00:00

I'm developing a "python + C++ DLL" application that calls the Windows MediaCapture interface to open and read camera data (in NV12 or YUY2 format) in SharedReadOnly mode. It must work alongside MS Teams while Teams opens the camera (in exclusive mode), to perform some image-recognition functionality. My application normally works fine with Teams.

But today I hit a problem. One laptop's camera supports both the NV12 and MJPG data formats. Teams selects MJPG, and then my application can't get data any more, no matter which app opens the camera first. From the code, I can see that the frame it gets is now null:

auto frame = m_FrameReader.TryAcquireLatestFrame();

I did some research, and the root cause might be this:

  1. Teams selects the MJPG format according to its own strategy. Teams uses the camera in exclusive mode, and my application uses it in SharedReadOnly mode. In such a scenario, Teams' choice of camera data format determines the final data format my application receives.
  2. The camera hardware then outputs MJPG data via its MJPG pin. The MJPG pin supports only one active instance.
  3. The MJPG decoder (an MFT) is bound to that MJPG pin, converts the data to NV12, and finally feeds it to Teams.
  4. The MJPG pin and MJPG decoder can usually be accessed by only a single client (e.g., Teams); this depends on the driver's design, and it is why my application can no longer get camera data.

I hope someone familiar with this area can confirm whether my understanding above is correct and provide appropriate suggestions. Thank you!

Microsoft Teams | Development
Building, integrating, or customizing apps and workflows within Microsoft Teams using developer tools and APIs

3 answers

  1. Teddie-D 8,120 Reputation points Microsoft External Staff Moderator
    2025-11-05T02:11:39.9466667+00:00

    Hi @Tom JD1 Xue

    Thank you for posting your question in the Microsoft Q&A forum. 

    Your understanding is mostly correct. The issue occurs because Microsoft Teams selects an MJPG path that only supports a single active instance. When this happens, the Windows Camera Frame Server can’t deliver frames to your SharedReadOnly session, so your TryAcquireLatestFrame() call keeps returning null. 

    When you initialize MediaCapture with SharingMode set to SharedReadOnly, Windows allows multiple apps to access the same camera at the same time. This mode is designed for scenarios where your app needs to run alongside another app such as Microsoft Teams. You can’t change camera settings in this mode, but you should still be able to receive frames unless something in the camera pipeline is blocking them. 
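As a minimal sketch of that initialization (C++/WinRT; `deviceId` and the `Cpu` memory preference are assumptions for illustration, not taken from the poster's code):

```cpp
// Sketch only: initialize MediaCapture in SharedReadOnly mode so it can run
// alongside an app (like Teams) that holds exclusive control of the camera.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Media.Capture.h>

using namespace winrt;
using namespace Windows::Media::Capture;

winrt::Windows::Foundation::IAsyncAction InitSharedAsync(
    MediaCapture mediaCapture, hstring deviceId)
{
    MediaCaptureInitializationSettings settings;
    settings.VideoDeviceId(deviceId);
    settings.SharingMode(MediaCaptureSharingMode::SharedReadOnly);
    settings.StreamingCaptureMode(StreamingCaptureMode::Video);
    // Cpu makes frames available as SoftwareBitmap for CPU-side processing.
    settings.MemoryPreference(MediaCaptureMemoryPreference::Cpu);
    co_await mediaCapture.InitializeAsync(settings);
}
```

In SharedReadOnly mode the settings above only read the stream; any format or control changes remain with the exclusive-mode client.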

    If TryAcquireLatestFrame() continually returns null, it usually means that no frames are available or the reader isn’t actually receiving any data. This can happen when another application is using the camera in a way that prevents the Frame Server from sharing the stream. Microsoft recommends handling the FrameArrived event and reading frames inside that handler, always checking for null. If the event fires but no frame is delivered, the pipeline may be blocked. 
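The FrameArrived pattern described above looks roughly like this in C++/WinRT (a sketch, assuming an already-initialized MediaCapture and a selected MediaFrameSource):

```cpp
// Sketch: read frames inside the FrameArrived handler, always checking for null.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Media.Capture.h>
#include <winrt/Windows.Media.Capture.Frames.h>

using namespace winrt;
using namespace Windows::Media::Capture;
using namespace Windows::Media::Capture::Frames;

winrt::fire_and_forget StartReaderAsync(MediaCapture mediaCapture,
                                        MediaFrameSource frameSource)
{
    MediaFrameReader reader =
        co_await mediaCapture.CreateFrameReaderAsync(frameSource);

    reader.FrameArrived([](MediaFrameReader const& sender,
                           MediaFrameArrivedEventArgs const&)
    {
        // The event can fire without a deliverable frame, so check for null.
        if (MediaFrameReference frame = sender.TryAcquireLatestFrame())
        {
            // Process frame.VideoMediaFrame() here.
        }
        // If this stays null while the event keeps firing, the pipeline is
        // likely blocked (e.g., by a single-instance MJPG path).
    });

    co_await reader.StartAsync();
}
```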

    Whether Teams choosing MJPG prevents your app from receiving frames depends on whether the Windows Camera Frame Server is active. The Frame Server is responsible for sharing camera access across apps. If Teams or the camera driver bypasses the Frame Server by locking a hardware MJPG decoder or using a single-instance MJPG pin, your app may be blocked from receiving frames. 

    To improve compatibility and avoid MJPG-related issues, consider the following steps: 

    -Use an uncompressed format like NV12 or YUY2. Even if Teams uses MJPG, you can often open a parallel NV12 or YUY2 stream through the Frame Server. Select NV12 or YUY2 explicitly from the SupportedFormats list of your MediaFrameSource to avoid binding to MJPG hardware decoding.
    Reference: Process media frames with MediaFrameReader - Windows apps | Microsoft Learn
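A sketch of that explicit format selection (C++/WinRT; the subtype strings are the ones MediaFrameFormat reports for these formats):

```cpp
// Sketch: pick a non-MJPG format from the source's SupportedFormats list.
// Assumes frameSource came from MediaCapture.FrameSources().
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Media.Capture.Frames.h>

using namespace winrt;
using namespace Windows::Media::Capture::Frames;

winrt::Windows::Foundation::IAsyncAction SelectUncompressedFormatAsync(
    MediaFrameSource frameSource)
{
    for (MediaFrameFormat const& format : frameSource.SupportedFormats())
    {
        auto subtype = format.Subtype(); // e.g. L"NV12", L"YUY2", L"MJPG"
        if (subtype == L"NV12" || subtype == L"YUY2")
        {
            co_await frameSource.SetFormatAsync(format);
            co_return;
        }
    }
    // No uncompressed format found; this source may only expose MJPG.
}
```

Note that in SharedReadOnly mode SetFormatAsync is not available to your app; this selection applies when your app has control, or when choosing among formats the Frame Server already exposes.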

    -Lower your requested resolution or frame rate. Some camera drivers only support concurrent streams at reduced resolutions or when one stream uses NV12 or YUY2. Use camera profiles to find combinations that support concurrent access and initialize your app with one of those profiles. 
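A sketch of the profile-based approach (C++/WinRT; choosing the VideoConferencing profile set is an assumption, it is simply the set most likely to support concurrency):

```cpp
// Sketch: build initialization settings from a known camera profile that
// is likely to allow concurrent streams at reduced resolutions.
#include <winrt/Windows.Media.Capture.h>

using namespace winrt;
using namespace Windows::Media::Capture;

MediaCaptureInitializationSettings MakeSharedProfileSettings(hstring const& deviceId)
{
    MediaCaptureInitializationSettings settings;
    settings.VideoDeviceId(deviceId);
    settings.SharingMode(MediaCaptureSharingMode::SharedReadOnly);

    auto profiles = MediaCapture::FindKnownVideoProfiles(
        deviceId, KnownVideoProfile::VideoConferencing);
    if (profiles.Size() > 0)
    {
        settings.VideoProfile(profiles.GetAt(0));
    }
    return settings;
}
```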

    -Avoid using hardware MJPG decoding. If your pipeline uses Media Foundation transforms, disable hardware decoding so the system uses software decoding instead. This allows multiple apps to decode frames simultaneously.
    For Source Reader or Sink Writer pipelines, set MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS to false.
For MediaCapture, follow the guidance on disabling hardware transforms.
    Note: Microsoft is providing this information as a convenience to you. These sites are not controlled by Microsoft, and Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. Please ensure that you fully understand the risks before using any suggestions from the above link.
    Reference: MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS attribute (Mfreadwrite.h) - Win32 apps | Microsoft Learn
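For the Source Reader case, the attribute is set like this (a Media Foundation sketch; error handling trimmed for brevity):

```cpp
// Sketch: disable hardware MFTs when creating a Source Reader, so MJPG
// decoding falls back to the software decoder that multiple apps can share.
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

HRESULT CreateSoftwareOnlyReader(IMFMediaSource* source, IMFSourceReader** reader)
{
    IMFAttributes* attrs = nullptr;
    HRESULT hr = MFCreateAttributes(&attrs, 1);
    if (SUCCEEDED(hr))
    {
        // FALSE = do not load hardware transforms; use software decoding.
        hr = attrs->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, FALSE);
    }
    if (SUCCEEDED(hr))
    {
        hr = MFCreateSourceReaderFromMediaSource(source, attrs, reader);
    }
    if (attrs) attrs->Release();
    return hr;
}
```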

    -Start your SharedReadOnly reader before Teams opens the camera. This can sometimes encourage Windows to keep the Frame Server path active. It’s not guaranteed, but it’s a quick test that might help. 
    Reference: MediaCaptureInitializationSettings.SharingMode Property (Windows.Media.Capture) - Windows apps | Mi… 

    -Update your camera driver or switch to the Windows inbox UVC driver. Many concurrency issues are caused by vendor-specific drivers or MJPG paths that rely on single-instance hardware transforms. 

    -If your scenario allows, use a virtual camera driver that mirrors the physical camera. One instance can feed Teams (MJPG), while another feeds your processing pipeline (NV12). 

    I hope this information is helpful. 




  2. Tom JD1 Xue 21 Reputation points
    2025-11-05T14:37:46.18+00:00

    Hi @Teddie-D ,

    Thank you very much for your great help!

    I have done some experiments on what you suggested.

    "-Lower your requested resolution or frame rate. Some camera drivers only support concurrent streams at reduced resolutions or when one stream uses NV12 or YUY2. Use camera profiles to find combinations that support concurrent access and initialize your app with one of those profiles. "

    For this one, I selected NV12 or YUY2 at the lowest resolution. I could then get camera data, but after the data is saved as an image it looks like colorful snowflakes (noise). I don't know why.

    "-Start your SharedReadOnly reader before Teams opens the camera. This can sometimes encourage Windows to keep the Frame Server path active. It’s not guaranteed, but it’s a quick test that might help. "

    This doesn't work: when Teams selects MJPG, my application can't get data any more, no matter which app opens the camera first. Even if I set the format to NV12 again after Teams starts, I still get a null frame.

    "-Update your camera driver or switch to the Windows inbox UVC driver. Many concurrency issues are caused by vendor-specific drivers or MJPG paths that rely on single-instance hardware transforms. "

    I did this before I made this post, but it still doesn't work.

    "-If your scenario allows, use a virtual camera driver that mirrors the physical camera. One instance can feed Teams (MJPG), while another feeds your processing pipeline (NV12). "

    This is a great method, but it is too complex; we need to solve this issue within a short time.

    "-Avoid using hardware MJPG decoding. If your pipeline uses Media Foundation transforms, disable hardware decoding so the system uses software decoding instead. This allows multiple apps to decode frames simultaneously. For Source Reader or Sink Writer pipelines, set MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS to false. For MediaCapture, follow the guidance on disabling hardware transforms. "

    I haven't tried this one; it's not very clear to me how to implement it.

    One more question, as you said "-Use an uncompressed format like NV12 or YUY2. Even if Teams uses MJPG, you can often open a parallel NV12 or YUY2 stream through the Frame Server."

    So, for example, if app1 uses NV12 and app2 uses YUY2, the camera would provide two pins, one outputting NV12 data and the other outputting YUY2 data. Is this workable?

    I have discussed this question with different people, and we always end up disagreeing about it.

    Finally, we decided not to support the MJPG format at the current stage, so this issue is no longer urgent to resolve. But I still greatly appreciate your selfless assistance. Thanks!


  3. Tom JD1 Xue 21 Reputation points
    2025-11-12T15:13:48.4133333+00:00

    Hi @Teddie-D

    Really, thanks for your kind help!

    I see your points, thanks so much for your expertise!

    Roughly speaking: for the YUY2 data, for example, I didn't convert it. I just saved it, checked it with a tool, and saw snowflakes.
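In case it helps anyone hitting the same "snowflakes": raw YUY2 bytes viewed directly as an RGB image look like noise, and ignoring the row stride (which can exceed width * 2) scrambles the picture further. A stride-aware YUY2-to-BGRA conversion along these lines (plain C++, BT.601 integer approximation; the buffer layout here is assumed for illustration, not taken from my attached files) is needed before saving:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch: convert one YUY2 (Y0 U Y1 V) frame to BGRA8, honoring the row
// stride reported by the frame (stride can be larger than width * 2).
std::vector<uint8_t> Yuy2ToBgra(const uint8_t* src, int width, int height,
                                int strideBytes)
{
    auto clamp8 = [](int v) { return static_cast<uint8_t>(std::clamp(v, 0, 255)); };
    std::vector<uint8_t> dst(static_cast<size_t>(width) * height * 4);

    for (int y = 0; y < height; ++y)
    {
        const uint8_t* row = src + static_cast<size_t>(y) * strideBytes;
        uint8_t* out = dst.data() + static_cast<size_t>(y) * width * 4;
        for (int x = 0; x < width; x += 2)
        {
            int y0 = row[2 * x + 0], u = row[2 * x + 1];
            int y1 = row[2 * x + 2], v = row[2 * x + 3];
            for (int i = 0; i < 2; ++i)
            {
                // BT.601 integer approximation of YUV -> RGB.
                int c = ((i == 0 ? y0 : y1) - 16) * 298;
                int d = u - 128, e = v - 128;
                out[4 * (x + i) + 0] = clamp8((c + 516 * d + 128) >> 8);           // B
                out[4 * (x + i) + 1] = clamp8((c - 100 * d - 208 * e + 128) >> 8); // G
                out[4 * (x + i) + 2] = clamp8((c + 409 * e + 128) >> 8);           // R
                out[4 * (x + i) + 3] = 255;                                        // A
            }
        }
    }
    return dst;
}
```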

    I summarized the code flow and my questions about how I fetch YUY2/BGRA8 frames in the text files below; please check them.

    BGRA8.txt

    YUY2.txt

    Sorry, I missed your last post. I will study the camera-access-forbidden issue further tomorrow.

    BR,

    <PII removed>

