From 6b463059532f00f5c2e637e840b89a521f994372 Mon Sep 17 00:00:00 2001
From: Trung Le <58827450+ltrung@users.noreply.github.com>
Date: Mon, 27 Dec 2021 13:54:18 -0800
Subject: [PATCH] Update guides to point to main (#1902)

---
 CONTRIBUTING.md                             | 11 +++++-
 README.md                                   |  6 +--
 demos/serverless/README.md                  |  2 +-
 docs/index.html                             |  6 +--
 .../backgroundfilter_video_processor.html   |  3 +-
 docs/modules/contentshare.html              |  2 +-
 docs/modules/faqs.html                      | 26 ++++++++-----
 docs/modules/migrationto_2_0.html           |  2 +-
 docs/modules/projectboard.html              |  3 +-
 .../qualitybandwidth_connectivity.html      | 39 ++++++++++---------
 guides/02_Content_Share.md                  |  2 +-
 guides/04_Quality_Bandwidth_Connectivity.md | 37 +++++++++---------
 guides/07_FAQs.md                           | 26 ++++++++-----
 guides/08_Migration_to_2_0.md               |  2 +-
 guides/13_Project_Board.md                  |  3 +-
 .../15_Background_Filter_Video_Processor.md |  3 +-
 16 files changed, 100 insertions(+), 73 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 644b7a9cf8..3f9c019d76 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -18,7 +18,16 @@ reported the issue. When possible use an existing template and provide all the i
 Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:

-1. You are working against the latest source on the *master* branch.
+1. You are working against the latest source on the *main* branch.
+
+   Note that we recently renamed our branch from *master* to *main*. If you have checked out our repo before, please run the following commands:
+
+   ```
+   git branch -m master main
+   git fetch origin
+   git branch -u origin/main main
+   git remote set-head origin -a
+   ```
 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
 3. You open an issue to discuss any significant work.
diff --git a/README.md b/README.md
index 2f51861ca6..d87499d77c 100644
--- a/README.md
+++ b/README.md
@@ -111,9 +111,9 @@ The following developer guides cover the Amazon Chime SDK more broadly.
 ## Examples

 - [Amazon Chime SDK Samples](https://github.com/aws-samples/amazon-chime-sdk) — Amazon Chime SDK Samples repository
-- [Meeting Demo](https://github.com/aws/amazon-chime-sdk-js/tree/master/demos/browser) — A browser
+- [Meeting Demo](https://github.com/aws/amazon-chime-sdk-js/tree/main/demos/browser) — A browser
   meeting application with a local server
-- [Serverless Meeting Demo](https://github.com/aws/amazon-chime-sdk-js/tree/master/demos/serverless) — A self-contained serverless meeting application
+- [Serverless Meeting Demo](https://github.com/aws/amazon-chime-sdk-js/tree/main/demos/serverless) — A self-contained serverless meeting application
 - [Single JS](https://github.com/aws-samples/amazon-chime-sdk/tree/main/utils/singlejs) — A script to bundle the SDK into a single `.js` file
 - [Recording Demo](https://aws.amazon.com/blogs/business-productivity/how-to-enable-client-side-recording-using-the-amazon-chime-sdk/) — Recording the meeting's audio, video and screen share in high definition
 - [Virtual Classroom](https://aws.amazon.com/blogs/business-productivity/building-a-virtual-classroom-application-using-the-amazon-chime-sdk/) — An online classroom built with Electron and React
@@ -1253,7 +1253,7 @@ The use of Amazon Voice Focus and background blur via this SDK involves the down
 The use of Amazon Voice Focus and background blur runtime code is subject to additional notices. See [this Amazon Voice Focus NOTICES file](https://static.sdkassets.chime.aws/workers/NOTICES.txt) and [background blur NOTICES file](https://static.sdkassets.chime.aws/bgblur/workers/NOTICES.txt) for details. You agree to make these additional notices available to all end users who use Amazon Voice Focus and background blur runtime code via this SDK.
-The browser demo applications in the [demos directory](https://github.com/aws/amazon-chime-sdk-js/tree/master/demos) use [TensorFlow.js](https://github.com/tensorflow/tfjs) and pre-trained [TensorFlow.js models](https://github.com/tensorflow/tfjs-models) for image segmentation. Use of these third party models involves downloading and execution of code at runtime from [jsDelivr](https://www.jsdelivr.com/) by end user browsers. For the jsDelivr Acceptable Use Policy, please visit this [link](https://www.jsdelivr.com/terms/acceptable-use-policy-jsdelivr-net).
+The browser demo applications in the [demos directory](https://github.com/aws/amazon-chime-sdk-js/tree/main/demos) use [TensorFlow.js](https://github.com/tensorflow/tfjs) and pre-trained [TensorFlow.js models](https://github.com/tensorflow/tfjs-models) for image segmentation. Use of these third party models involves downloading and execution of code at runtime from [jsDelivr](https://www.jsdelivr.com/) by end user browsers. For the jsDelivr Acceptable Use Policy, please visit this [link](https://www.jsdelivr.com/terms/acceptable-use-policy-jsdelivr-net).

 The use of TensorFlow runtime code referenced above may be subject to additional license requirements. See the licenses page for TensorFlow.js [here](https://github.com/tensorflow/tfjs/blob/master/LICENSE) and TensorFlow.js models [here](https://github.com/tensorflow/tfjs-models/blob/master/LICENSE) for details.

diff --git a/demos/serverless/README.md b/demos/serverless/README.md
index d2efe1ba96..cc539e406f 100644
--- a/demos/serverless/README.md
+++ b/demos/serverless/README.md
@@ -1,6 +1,6 @@
 ## Serverless Demo

-This demo shows how to deploy [Chime SDK Browser Demo](https://github.com/aws/amazon-chime-sdk-js/tree/master/demos/browser) as self-contained serverless applications.
+This demo shows how to deploy [Chime SDK Browser Demo](https://github.com/aws/amazon-chime-sdk-js/tree/main/demos/browser) as self-contained serverless applications.

 > *Note: Deploying the Amazon Chime SDK demo applications contained in this repository will cause your AWS Account to be billed for services, including the Amazon Chime SDK, used by the application.*

diff --git a/docs/index.html b/docs/index.html
index 8d4827357c..d6ee7eb255 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -189,9 +189,9 @@
.js file
The use of Amazon Voice Focus and background blur via this SDK involves the downloading and execution of code at runtime by end users.
The use of Amazon Voice Focus and background blur runtime code is subject to additional notices. See this Amazon Voice Focus NOTICES file and background blur NOTICES file for details. You agree to make these additional notices available to all end users who use Amazon Voice Focus and background blur runtime code via this SDK.
-The browser demo applications in the demos directory use TensorFlow.js and pre-trained TensorFlow.js models for image segmentation. Use of these third party models involves downloading and execution of code at runtime from jsDelivr by end user browsers. For the jsDelivr Acceptable Use Policy, please visit this link.
+The browser demo applications in the demos directory use TensorFlow.js and pre-trained TensorFlow.js models for image segmentation. Use of these third party models involves downloading and execution of code at runtime from jsDelivr by end user browsers. For the jsDelivr Acceptable Use Policy, please visit this link.
The use of TensorFlow runtime code referenced above may be subject to additional license requirements. See the licenses page for TensorFlow.js here and TensorFlow.js models here for details.
Live transcription using the Amazon Chime SDK for JavaScript is powered by Amazon Transcribe. Use of Amazon Transcribe is subject to the AWS Service Terms, including the terms specific to the AWS Machine Learning and Artificial Intelligence Services. Standard charges for Amazon Transcribe and Amazon Transcribe Medical will apply.
You and your end users understand that recording Amazon Chime SDK meetings may be subject to laws or regulations regarding the recording of electronic communications. It is your and your end users’ responsibility to comply with all applicable laws regarding the recordings, including properly notifying all participants in a recorded session, or communication that the session or communication is being recorded, and obtain their consent.
diff --git a/docs/modules/backgroundfilter_video_processor.html b/docs/modules/backgroundfilter_video_processor.html
index 3d4411e71d..d9a346e2a9 100644
--- a/docs/modules/backgroundfilter_video_processor.html
+++ b/docs/modules/backgroundfilter_video_processor.html
@@ -77,7 +77,8 @@
The background blur API allows builders to enable background blur on a video stream. To add a background blur to a video stream the builder needs to create a VideoFrameProcessor
using BackgroundBlurVideoFrameProcessor
and then insert that processor into a VideoTransformDevice
. The video frame processor uses a TensorFlow Lite (TFLite) machine learning (ML) model along with JavaScript Web Workers and WebAssembly (WASM) to apply blur to the background of each frame in the video stream. These assets are downloaded at runtime when the video frame processor is created and not provided in the source directly.
Background blur is integrated into the browser demo application. To try it out, launch the demo with npm run start
, join the meeting, click on the camera icon to enable video, then select the video filter drop down and select Background Blur
.
Background blur is integrated into the browser demo application. To try it out, launch the demo with npm run start
,
+ join the meeting, click on the camera icon to enable video, then select the video filter drop down and select Background Blur
.
This guide explains how to share audio and video content such as screen capture or media files in a meeting. This guide assumes you have already created a meeting and - added attendees to the meeting (see Setup section in our Readme for more information).
+ added attendees to the meeting (see Setup section in our Readme for more information).
Content share methods are accessed from the audio-video facade belonging to the meeting session.
diff --git a/docs/modules/faqs.html b/docs/modules/faqs.html
index 1e6bb7387f..fa27a1f2f5 100644
--- a/docs/modules/faqs.html
+++ b/docs/modules/faqs.html
@@ -182,7 +182,7 @@
Once the limit of 25 video tiles is reached in a meeting, each subsequent participant that tries to turn on the local video will receive a Meeting Session status code of VideoCallSwitchToViewOnly = 10 which in turn triggers the observer 'videoSendDidBecomeUnavailable'.
+Once the limit of 25 video tiles is reached in a meeting, each subsequent participant that tries to turn on the local video will receive a Meeting Session status code of VideoCallSwitchToViewOnly = 10, which in turn triggers the observer 'videoSendDidBecomeUnavailable'.
TaskFailed
or SignalingInternalServerError
will be handled for retries.
The SDK uses ConnectionHealthPolicyConfiguration to trigger a reconnection. We recommend using the default configuration, but you can also provide a custom ConnectionHealthPolicyConfiguration object to change this behavior.
+The SDK uses ConnectionHealthPolicyConfiguration to trigger a reconnection. We recommend using the default configuration, but you can also provide a custom ConnectionHealthPolicyConfiguration object to change this behavior.
import {
ConnectionHealthPolicyConfiguration,
ConsoleLogger,
@@ -264,7 +264,7 @@ When does the Amazon Chime SDK retry the connection? Can I customize this re
What is the timeout for connect and reconnect and where can I configure the value?
-
The maximum amount of time to allow for connecting is 15 seconds, which can be configurable in MeetingSessionConfiguration. The reconnectTimeout is configurable for how long you want to timeout the reconnection. The default value is 2 minutes.
+ The maximum amount of time to allow for connecting is 15 seconds, which is configurable in MeetingSessionConfiguration. The reconnectTimeout value controls how long the SDK keeps attempting to reconnect before timing out; the default value is 2 minutes.
Media
@@ -281,7 +281,9 @@ How do I choose video resolution, frame rate and bitrate?
How can I stream music or video into a meeting?
- You can use the AudioVideoFacade.startContentShare(MediaStream) API to stream audio and/or video content to the meetings. See the meeting demo application for an example of how to achieve this.
+ You can use the [AudioVideoFacade.startContentShare(MediaStream)](https://aws.github.io/amazon-chime-sdk-js/interfaces/audiovideofacade.html#startcontentshare) API to stream audio and/or video content
+ to the meeting. See the meeting demo application for an example of how to
+ achieve this.
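The content-share flow this FAQ describes can be sketched as follows. This is a hedged illustration, not SDK code: `makeContentShareStarter` is a hypothetical helper, and the facade object it receives only needs to mimic the `startContentShare(MediaStream)` call shape, so the flow can run outside a browser.

```javascript
// Hypothetical sketch: capture a media element's stream and hand it to
// startContentShare, as the FAQ describes. In a real app the facade would be
// the SDK's AudioVideoFacade and captureStream() the browser API on
// HTMLMediaElement; here both are left injectable for illustration.
function makeContentShareStarter(audioVideo) {
  return async function startElementShare(mediaElement) {
    const stream = mediaElement.captureStream(); // browser API in practice
    await audioVideo.startContentShare(stream);  // share into the meeting
    return stream;
  };
}
```

In an application you would call this with `meetingSession.audioVideo` and an `<audio>` or `<video>` element that is already playing.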
When I stream video in Chrome, other attendees see a black screen. Is this a known issue?
@@ -353,11 +356,12 @@ Debugging
How can I get Amazon Chime SDK logs for debugging?
- Applications can get logs from Chime SDK by passing instances of Logger
when instantiating the MeetingSession object. Amazon Chime SDK has some default implementations of logger that your application can use, such as ConsoleLogger which logs into the browser console, MeetingSessionPOSTLogger which logs in Amazon CloudWatch and MultiLogger which logs in multiple destinations.
+ Applications can get logs from Chime SDK by passing instances of Logger
when instantiating the MeetingSession object. Amazon Chime SDK has some default implementations of logger that your application can use, such as ConsoleLogger which logs into the browser console, MeetingSessionPOSTLogger which logs in Amazon CloudWatch and MultiLogger which logs in multiple destinations.
How do I file an issue for the Amazon Chime SDK for JavaScript?
- You can use our bug template to file issues with logs (it is helpful to set the logging level as INFO) and exact reproduction steps. To help you faster, you can check the usage of the API in our API overview, demos and the usage section in our Readme. In addition search our issues database as your concern may have been addressed previously and mitigations may have been posted.
+ You can use our bug template to file issues with logs (it is helpful to set the logging
+ level as INFO) and exact reproduction steps. To help you faster, you can check the usage of the API in our API overview, demos and the usage section in our Readme. In addition, search our issues database as your concern may have been addressed previously and mitigations may have been posted.
Networking
@@ -433,7 +437,7 @@ I am not able to join an Amazon Chime SDK meeting from an Android 11 device
Does Amazon Voice Focus support the Samsung Internet browser?
-
Yes, Amazon Voice Focus supports the Samsung Internet browser (Chromium 83 or lower). However, it leads to a poor user experience because the preferred Chromium version is 87 or higher. Please check Amazon Voice Focus browser compatibility matrix in Amazon Voice Focus guide.
+ Yes, Amazon Voice Focus supports the Samsung Internet browser (Chromium 83 or lower). However, it leads to a poor user experience because the preferred Chromium version is 87 or higher. Please check the Amazon Voice Focus browser compatibility matrix in the Amazon Voice Focus guide.
Audio and video
@@ -447,8 +451,8 @@ How can I create a video tile layout for my application?
Applications that show multiple video tiles on the screen will need to decide where to place the underlying video elements and how to apply CSS styling. Here are a few things to consider as you develop the tile layout for your application:
- A video whose source is a mobile device in portrait mode will display quite differently compared to a video in landscape mode from a laptop camera. The CSS object-fit rule can be applied to the video element to change how the content scales to fit the parent video element.
- - Use the
VideoTileState
videoStreamContentWidth and videoStreamContentHeight properties to determine the aspect ratio of the content.
- - After calling bindVideoElement, set up
resize
event listeners on the HTMLVideoElement to listen to intrinsic resolution from the video content.
+ - Use the
VideoTileState
videoStreamContentWidth and videoStreamContentHeight properties to determine the aspect ratio of the content.
+ - After calling bindVideoElement, set up
resize
event listeners on the HTMLVideoElement to listen to intrinsic resolution from the video content.
- For landscape aspect ratios (width > height), apply the CSS rule
object-fit:cover
to the HTML element that will contain the video to crop and scale the video to the aspect ratio of the video element.
- For portrait aspect ratios (height > width), apply the CSS rule
object-fit:contain
to the HTML element that will contain the video to ensure that all video content can be seen.
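The tile-layout rules above (cover for landscape, contain for portrait) reduce to a tiny helper. `objectFitFor` is a hypothetical name, not an SDK API; in an application you would feed it the VideoTileState's videoStreamContentWidth and videoStreamContentHeight and assign the result to the video element's `style.objectFit`.

```javascript
// Hypothetical helper: map a video's intrinsic dimensions to the object-fit
// rule the guide recommends.
function objectFitFor(contentWidth, contentHeight) {
  // Landscape (width > height): crop-and-fill the tile with 'cover'.
  // Portrait or square: 'contain' letterboxes so no content is cut off.
  return contentWidth > contentHeight ? 'cover' : 'contain';
}
```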
@@ -479,7 +483,9 @@ My clients are unable to successfully join audio calls from Safari, they get
How can I show a custom UX when the browser prompts the user for permission to use microphone and camera?
Device labels are privileged since they add to the fingerprinting surface area of the browser session. In Chrome private tabs and in all Firefox tabs, the labels can only be read once a MediaStream is active. How to deal with this restriction depends on the desired UX. The device controller includes an injectable device label trigger which allows you to perform custom behavior in case there are no labels, such as creating a temporary audio/video stream to unlock the device names, which is the default behavior.
- You may want to override this behavior to provide a custom UX such as a prompt explaining why microphone and camera access is being asked for by supplying your own function to setDeviceLabelTrigger(). See the meeting demo application for an example.
+ You may want to override this behavior to provide a custom UX such as a prompt explaining why microphone and camera
+ access is being asked for by supplying your own function to setDeviceLabelTrigger(). See the meeting demo application for
+ an example.
meetingSession.audioVideo.setDeviceLabelTrigger(
async (): Promise<MediaStream> => {
// For example, let the user know that the browser is asking for microphone and camera permissions.
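A fuller version of the trigger sketched in the snippet above might look like the following. This is a hedged sketch: `showPrompt` and `hidePrompt` are hypothetical application callbacks, and `getUserMedia` is passed in as a parameter so the logic can be shown (and exercised) without a browser.

```javascript
// Hypothetical sketch of a custom device label trigger: show an explanatory
// prompt, request a combined audio+video stream so the browser reveals
// device labels, then hide the prompt whether or not the user consents.
function makeDeviceLabelTrigger(getUserMedia, showPrompt, hidePrompt) {
  return async function deviceLabelTrigger() {
    showPrompt(); // e.g. explain why microphone/camera access is requested
    try {
      return await getUserMedia({ audio: true, video: true });
    } finally {
      hidePrompt(); // always dismiss the prompt, even on denial
    }
  };
}
```

In an application, the returned function would be handed to `meetingSession.audioVideo.setDeviceLabelTrigger(...)` with `navigator.mediaDevices.getUserMedia` bound in.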
diff --git a/docs/modules/migrationto_2_0.html b/docs/modules/migrationto_2_0.html
index 0a5ddf9daf..a964b312af 100644
--- a/docs/modules/migrationto_2_0.html
+++ b/docs/modules/migrationto_2_0.html
@@ -270,7 +270,7 @@ Introducing supportsSetSinkId()
in DefaultBrowserBehavior
Deprecating legacy screen share
From version 2.0 onwards, the Amazon Chime SDK for JavaScript will no longer include the deprecated screen share API identified by ScreenShareFacade
and ScreenShareViewFacade
and all related code.
- Customers should use our Video Based Content Sharing detailed in our Content Share guide.
+ Customers should use our Video Based Content Sharing detailed in our Content Share guide.
diff --git a/docs/modules/projectboard.html b/docs/modules/projectboard.html
index 60d8249d6c..1788e5370c 100644
--- a/docs/modules/projectboard.html
+++ b/docs/modules/projectboard.html
@@ -73,7 +73,8 @@ Namespace ProjectBoard
Amazon Chime SDK Project Board
- The Amazon Chime SDK Project Board provides builders with an overview of community feature requests and their status across all the repositories for the Amazon Chime SDK (Amazon Chime SDK for JavaScript, Amazon Chime SDK React Component Library, Amazon Chime SDK for iOS and Amazon Chime SDK for Android). Each repository has a Community Issue template that allows builders to contribute community feature ideas for the Amazon Chime SDK.
+ The Amazon Chime SDK Project Board provides builders with an overview of
+ community feature requests and their status across all the repositories for the Amazon Chime SDK (Amazon Chime SDK for JavaScript, Amazon Chime SDK React Component Library, Amazon Chime SDK for iOS and Amazon Chime SDK for Android). Each repository has a Community Issue template that allows builders to contribute community feature ideas for the Amazon Chime SDK.
The Amazon Chime team would like to prioritize issues that are most useful for our developer community. We recommend you give us a thumbs-up to the features on our Project Board that look interesting to you.
Note: This Project Board is provided for informational purposes only. AWS does not commit to release any solution or feature described in the Project Board.
diff --git a/docs/modules/qualitybandwidth_connectivity.html b/docs/modules/qualitybandwidth_connectivity.html
index 751891266a..a5a0951c0d 100644
--- a/docs/modules/qualitybandwidth_connectivity.html
+++ b/docs/modules/qualitybandwidth_connectivity.html
@@ -92,7 +92,7 @@ Challenges
Detection Mechanisms
- The Amazon Chime SDK for JavaScript produces several kinds of events on the AudioVideoObserver to monitor connectivity and quality. Use the following events and key health metrics to monitor the performance of the meeting session in real time. For code snippets showing how to subscribe to these events, see Monitoring and Alerts.
+ The Amazon Chime SDK for JavaScript produces several kinds of events on the AudioVideoObserver to monitor connectivity and quality. Use the following events and key health metrics to monitor the performance of the meeting session in real time. For code snippets showing how to subscribe to these events, see Monitoring and Alerts.
Metrics derived from WebRTC stats are not guaranteed to be present in all browsers. In such cases the value may be missing.
For the browser support columns below, "All" refers to the browsers officially supported by the Chime SDK.
@@ -107,12 +107,12 @@ Events for monitoring local attendee uplink
- videoSendHealthDidChange
+ videoSendHealthDidChange
Indicates the current average upstream video bitrate being utilized
Chromium-based
- videoSendBandwidthDidChange
+ videoSendBandwidthDidChange
Indicates the estimated amount of upstream bandwidth
Chromium-based
@@ -129,32 +129,32 @@ Events for monitoring local attendee downlink
- connectionDidSuggestStopVideo
+ connectionDidSuggestStopVideo
Indicates that the audio connection is experiencing packet loss. Stopping local video and pausing remote video tiles may help the connection recover by reducing CPU usage and network consumption.
All
- connectionDidBecomeGood
+ connectionDidBecomeGood
Indicates that the audio connection has improved.
All
- connectionDidBecomePoor
+ connectionDidBecomePoor
Similar to the previous metric, but is fired when local video is already turned off.
All
- videoNotReceivingEnoughData
+ videoNotReceivingEnoughData
Called when one or more remote attendee video streams do not meet the expected average bitrate which may be due to downlink packet loss.
All
- estimatedDownlinkBandwidthLessThanRequired
- Aggregated across all attendees, this event fires when more bandwidth is requested than what the WebRTC estimated downlink bandwidth supports. It is recommended to use this event over videoNotReceivingEnoughData.
+ estimatedDownlinkBandwidthLessThanRequired
+ Aggregated across all attendees, this event fires when more bandwidth is requested than what the WebRTC estimated downlink bandwidth supports. It is recommended to use this event over videoNotReceivingEnoughData.
Chromium-based
- videoReceiveBandwidthDidChange
+ videoReceiveBandwidthDidChange
This is the estimated amount of downstream bandwidth
Chromium-based
@@ -171,7 +171,7 @@ Events for monitoring remote attendee uplink
- realtimeSubscribeToVolumeIndicator
+ realtimeSubscribeToVolumeIndicator
The signalStrength
field indicates whether the server is receiving the remote attendee's audio. A value of 1 indicates a good connection, a value of 0.5 or 0 indicates some or total packet loss. Since each attendee receives the signal strength for all attendees, this event can be used to monitor the ability of attendees to share their audio in real-time.
All
@@ -188,7 +188,7 @@ Metrics exposed directly from the WebRTC peer connection
- metricsDidReceive
+ metricsDidReceive
Exposes the WebRTC getStats metrics. There may be differences among browsers as to which metrics are reported.
All
@@ -198,7 +198,8 @@ Metrics exposed directly from the WebRTC peer connection
You get the following WebRTC stats in the video stats widget:
Upstream video information: Frame height, Frame width, Bitrate (bps), Packets Sent, Frame Rate (fps).
Downstream video information: Frame height, Frame width, Bitrate (bps), Packet Loss (%), Frame Rate (fps).
- You can check the implementation in the demo app to build your own custom widget (look for getObservableVideoMetrics
method in the demo). This video stats widget is built using the getObservableVideoMetrics and the metricsDidReceive APIs. Through the information provided in these stats, the application can monitor key attributes and take action. For instance if the bitrate or resolution falls below a certain threshold, the user could be notified in some manner, or diagnostic reporting could take place.
+ You can check the implementation in the demo app to build your own custom widget
+ (look for getObservableVideoMetrics
method in the demo). This video stats widget is built using the getObservableVideoMetrics and the metricsDidReceive APIs. Through the information provided in these stats, the application can monitor key attributes and take action. For instance if the bitrate or resolution falls below a certain threshold, the user could be notified in some manner, or diagnostic reporting could take place.
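The monitoring idea above can be sketched as a minimal observer. This is an assumption-laden illustration: the handler name `metricsDidReceive` comes from this guide, but the report object here is a simplified stand-in for the SDK's ClientMetricReport, and `videoUpstreamBitrate` (in bits per second) is used as an example observable metric name.

```javascript
// Hedged sketch: watch metricsDidReceive payloads and flag low upstream
// video bitrate. The report shape is simplified for illustration.
function makeBitrateWatcher(thresholdKbps, onLow) {
  return {
    metricsDidReceive(report) {
      const metrics = report.getObservableMetrics();
      const kbps = metrics.videoUpstreamBitrate / 1000; // bps -> kbps
      if (kbps < thresholdKbps) {
        onLow(kbps); // e.g. notify the user or emit a diagnostic event
      }
    },
  };
}
```

In an application the returned object would be registered with `audioVideo.addObserver(...)` alongside your other AudioVideoObserver handlers.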
Events for monitoring currently active simulcast layers
@@ -235,7 +236,7 @@ Application profiling
Choose a lower local video quality
- Sometimes it is better to sacrifice video quality in order to prioritize audio. You can call chooseVideoInputQuality(width, height, frameRate, maxBandwidthKbps) and lower the maximum bandwidth in real-time. You can also adjust the resolution and frame rate if you call the method before starting local video (or stop and then restart the local video). See the section below on values you can use for chooseVideoInputQuality
.
+ Sometimes it is better to sacrifice video quality in order to prioritize audio. You can call chooseVideoInputQuality(width, height, frameRate, maxBandwidthKbps) and lower the maximum bandwidth in real-time. You can also adjust the resolution and frame rate if you call the method before starting local video (or stop and then restart the local video). See the section below on values you can use for chooseVideoInputQuality
.
Pause remote videos
@@ -247,7 +248,7 @@ Mitigations to conserve bandwidth
Adjust local video quality
- You can choose a video quality of up to 1280x720 (720p) at 30 fps and 2500 Kbps using chooseVideoInputQuality(width, height, frameRate, maxBandwidthKbps) API before the meeting session begins. However, in some cases it is not necessary to send the highest quality and you can use a lower values. For example, if the size of the video tile is small then the highest quality may not be worth the additional bandwidth and CPU overhead.
+ You can choose a video quality of up to 1280x720 (720p) at 30 fps and 2500 Kbps using the chooseVideoInputQuality(width, height, frameRate, maxBandwidthKbps) API before the meeting session begins. However, in some cases it is not necessary to send the highest quality and you can use lower values. For example, if the size of the video tile is small, then the highest quality may not be worth the additional bandwidth and CPU overhead.
The default resolution in the SDK is 540p at 15 fps and 1400 Kbps. Lower resolutions can be set if you anticipate a low bandwidth situation. Browser and codec support for very low resolutions may vary.
The value maxBandwidthKbps
is a recommendation you make to WebRTC to use as the upper limit for upstream sending bandwidth. The Chime SDK default is 1400 Kbps for this value. The following table provides recommended minimum and maximum bandwidth values per resolution for typical video-conferencing scenarios. Note that when low values are selected the video can appear pixelated. Using 15 fps instead of 30 fps will substantially decrease the required bit rate and may be acceptable for low-motion content.
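The guidance above can be sketched as a small preset table plus a helper. The preset numbers are illustrative choices loosely based on the default and maximum this guide states, and `applyVideoQuality` is a hypothetical wrapper around the 4-argument `chooseVideoInputQuality` signature shown here; the `audioVideo` parameter stands in for the SDK facade.

```javascript
// Hedged sketch: pick a lower capture quality before starting local video.
// Preset values are illustrative, loosely following this guide's
// 540p/15fps/1400 Kbps default and 720p/30fps/2500 Kbps maximum.
const QUALITY_PRESETS = {
  low:    { width: 640,  height: 360, frameRate: 15, maxBandwidthKbps: 600 },
  medium: { width: 960,  height: 540, frameRate: 15, maxBandwidthKbps: 1400 },
  high:   { width: 1280, height: 720, frameRate: 30, maxBandwidthKbps: 2500 },
};

function applyVideoQuality(audioVideo, presetName) {
  const p = QUALITY_PRESETS[presetName];
  if (!p) throw new Error(`unknown preset: ${presetName}`);
  // Matches the 4-argument signature shown in this guide; call it before
  // starting (or after stopping and before restarting) local video.
  audioVideo.chooseVideoInputQuality(p.width, p.height, p.frameRate, p.maxBandwidthKbps);
  return p;
}
```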