Compare commits

..

76 Commits

Author SHA1 Message Date
Simon Chan
79f9ec5801 Add workaround to capture audio on Android 11
On Android 11, audio capture can only be started while the capturing app
is in the foreground. But scrcpy is not an app, it's a Java application
started from shell.

As a workaround, start an existing Android shell activity just to start
the capture, then close it immediately.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-26 22:41:07 +01:00
Romain Vimont
b7d2086508 Add audio player
Play the decoded audio using SDL.

The audio player frame sink receives the audio frames, resamples them
and writes them to a byte buffer (introduced by this commit).

On SDL audio callback (from an internal SDL thread), copy samples from
this byte buffer to the SDL audio buffer.

The byte buffer is protected by SDL_AudioDeviceLock(), but it has been
designed so that the producer and the consumer may write and read in
parallel, provided that they don't access the same slices of the ring
buffer.

Co-authored-by: Simon Chan <1330321+yume-chan@users.noreply.github.com>
2023-02-26 22:41:07 +01:00
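The callback/ring-buffer split described above can be illustrated with a small standalone sketch (not the actual scrcpy code, which lives in the sc_audio_player diff further down): the SDL callback copies whatever is available from a shared byte FIFO into SDL's buffer and pads the rest with silence, while the producer appends under the SDL device lock. A plain linear FIFO keeps the sketch short; the real player uses the bytebuf ring buffer introduced by the commits below.

```c
// Standalone sketch of the consumer/producer split described above.
#include <SDL2/SDL.h>
#include <string.h>

#define FIFO_CAP (48000 * 2 * sizeof(float)) // ~1 second of 48kHz stereo f32

struct audio_out {
    Uint8 fifo[FIFO_CAP];
    size_t fifo_len; // bytes buffered, protected by the SDL device lock
};

// Called by SDL from its internal audio thread.
static void SDLCALL audio_callback(void *userdata, Uint8 *stream, int len) {
    struct audio_out *ao = userdata;
    size_t want = (size_t) len;
    size_t read = want < ao->fifo_len ? want : ao->fifo_len;
    memcpy(stream, ao->fifo, read);
    memmove(ao->fifo, ao->fifo + read, ao->fifo_len - read);
    ao->fifo_len -= read;
    if (read < want) {
        // Underflow: pad with silence instead of blocking the audio thread
        memset(stream + read, 0, want - read);
    }
}

// Producer side (decoder/frame sink thread): append resampled bytes under
// the device lock so the callback always sees a consistent FIFO state.
static void audio_push(SDL_AudioDeviceID dev, struct audio_out *ao,
                       const Uint8 *data, size_t size) {
    SDL_LockAudioDevice(dev);
    size_t room = FIFO_CAP - ao->fifo_len;
    size_t n = size < room ? size : room; // excess is simply dropped here
    memcpy(ao->fifo + ao->fifo_len, data, n);
    ao->fifo_len += n;
    SDL_UnlockAudioDevice(dev);
}
```

Opening the device with SDL_OpenAudioDevice() and an SDL_AudioSpec whose .callback field points at such a function (as in the diff further down) wires the two threads together.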
Romain Vimont
b12d1ae7cb Add two-step write feature to bytebuf
If there is exactly one producer, then it can assume that the remaining
space in the buffer will only increase until it writes something.

This assumption may allow the producer to write to the buffer (up to a
known safe size) without any synchronization mechanism, allowing reads
and writes to different parts of the buffer to happen in parallel.

The producer can then commit the write with the lock held, and update
its knowledge of the safe remaining empty space.
2023-02-26 22:41:07 +01:00
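A minimal standalone sketch of this single-producer two-step write, with a plain pthread mutex standing in for SDL_AudioDeviceLock(); the sc_bytebuf_prepare_write()/sc_bytebuf_commit_write() functions visible in the diffs below play the same roles.

```c
// Sketch of the single-producer "two-step write" described above.
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct bytebuf {
    uint8_t *data;
    size_t cap;   // capacity in bytes (one byte is kept unused)
    size_t head;  // next write index, only moved by the producer
    size_t tail;  // next read index, only moved by the consumer
    pthread_mutex_t lock;
};

// Step 1: copy bytes into the ring WITHOUT publishing them. No lock is
// needed: only the producer moves 'head', and the free space it observed
// earlier can only have grown since then (the consumer only frees space).
// The caller must ensure 'len' fits into that previously observed space.
static void bytebuf_prepare_write(struct bytebuf *b, const uint8_t *src,
                                  size_t len) {
    size_t right = b->cap - b->head; // contiguous space before wrap-around
    if (len <= right) {
        memcpy(&b->data[b->head], src, len);
    } else {
        memcpy(&b->data[b->head], src, right);
        memcpy(b->data, src + right, len - right);
    }
}

// Step 2: publish the prepared bytes and refresh the producer's knowledge
// of the space it may reuse without locking next time. Only this short
// critical section needs the lock shared with the consumer.
static size_t bytebuf_commit_write(struct bytebuf *b, size_t len) {
    pthread_mutex_lock(&b->lock);
    b->head = (b->head + len) % b->cap;
    size_t safe_empty = (b->tail + b->cap - b->head - 1) % b->cap;
    pthread_mutex_unlock(&b->lock);
    return safe_empty;
}
```

The point of the split is that the expensive memcpy() happens outside the critical section; only the head-pointer update runs with the lock held. When a chunk does not fit in the previously observed free space, the producer falls back to a normal write with the lock held.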
Romain Vimont
28289ab881 Introduce bytebuf util
Add a ring-buffer for bytes. It will be useful for buffering audio.
2023-02-26 22:41:07 +01:00
Romain Vimont
0cf7dabfc3 Pass AVCodecContext to frame sinks
Frame consumers may need details about the frame format.
2023-02-26 22:41:07 +01:00
Romain Vimont
dc8e6c3cfe Add an audio decoder 2023-02-26 22:41:07 +01:00
Romain Vimont
cb1d98a59c Give a name to decoder instances
This will be useful in logs.
2023-02-26 22:41:07 +01:00
Romain Vimont
9400584364 Rename decoder to video_decoder 2023-02-26 22:41:07 +01:00
Romain Vimont
98c2762eaa Log display sizes in display list
This is more convenient than just the display id alone.
2023-02-26 22:41:07 +01:00
Romain Vimont
243c8cf1b3 Add --list-device-displays 2023-02-26 22:41:07 +01:00
Romain Vimont
63bc6d1053 Move log message helpers to LogUtils
This class will also contain other log helpers.
2023-02-26 22:41:07 +01:00
Romain Vimont
6926f5e4fd Quit on audio configuration failure
When audio capture fails on the device, scrcpy continues mirroring the
video stream. This allows audio to be enabled by default only when
supported.

However, if an audio configuration error occurs (for example the user
explicitly selected an unknown audio encoder), it must be treated as an
error and scrcpy must exit.
2023-02-26 22:41:07 +01:00
Romain Vimont
004eb47d4a Add --list-encoders
Add an option to list the device encoders properly.
2023-02-26 22:41:07 +01:00
Romain Vimont
d720d424d8 Move await_for_server() logs
Print the logs on the caller side. This will allow calling the function
from another context without printing the logs.
2023-02-26 22:41:07 +01:00
Romain Vimont
43660079c5 Add --audio-encoder
Similar to --video-encoder, but for audio.
2023-02-26 22:41:07 +01:00
Romain Vimont
a5aba2948a Extract unknown encoder error message
This will allow reusing the same code for audio encoder selection.
2023-02-26 22:41:07 +01:00
Romain Vimont
7a17895111 Add --audio-codec-options
Similar to --video-codec-options, but for audio.
2023-02-26 22:41:07 +01:00
Romain Vimont
fb2c9ef9e7 Extract application of codec options
This will allow reusing the same code for audio codec options.
2023-02-26 22:41:07 +01:00
Romain Vimont
12d79686d1 Add support for AAC audio codec
Add option --audio-codec=aac.
2023-02-26 22:41:07 +01:00
Romain Vimont
923032e5ca Add --audio-codec
Introduce the selection mechanism. Alternative codecs will be added
later.
2023-02-26 22:41:07 +01:00
Romain Vimont
4689cd07d4 Add --audio-bit-rate
Add an option to configure the audio bit-rate.
2023-02-26 22:41:07 +01:00
Romain Vimont
d99c3da08f Disable MethodLength checkstyle on createOptions()
This method will grow as needed to initialize options.
2023-02-26 22:41:07 +01:00
Romain Vimont
4082cb32f9 Rename --encoder to --video-encoder
This prepares the introduction of --audio-encoder.
2023-02-26 22:41:07 +01:00
Romain Vimont
f3b4160d77 Rename --codec-options to --video-codec-options
This prepares the introduction of --audio-codec-options.
2023-02-26 22:41:07 +01:00
Romain Vimont
207ae8b73c Rename --bit-rate to --video-bit-rate
This prepares the introduction of --audio-bit-rate.
2023-02-26 22:41:07 +01:00
Romain Vimont
cf15859214 Rename --codec to --video-codec
This prepares the introduction of --audio-codec.
2023-02-26 22:41:07 +01:00
Romain Vimont
6cdd4e867b Remove default bit-rate on client side
If no bit-rate is passed, let the server use the default value (8Mbps).

This avoids defining a default value on both sides, and passing the
default bit-rate as an argument when starting the server.
2023-02-26 22:41:07 +01:00
Romain Vimont
560002047b Record at least video packets on stop
If the recorder is stopped while it has not received any audio packet
yet, make sure the video stream is correctly recorded.
2023-02-26 22:41:07 +01:00
Romain Vimont
3233aa1c4f Disable audio before Android 11
The permission "android.permission.RECORD_AUDIO" has been added for
shell in Android 11.

Moreover, on lower versions, it may make the server segfault on the
device (happened on a Nexus 5 with Android 6.0.1).

Refs <4feeee8891^!/>
2023-02-26 22:41:07 +01:00
Romain Vimont
a8a1da1a00 Disable audio on initialization error
By default, audio is enabled (--no-audio must be explicitly passed to
disable it).

However, some devices may not support audio capture (typically devices
below Android 11, or Android 11 when the shell application is not in the
foreground on start).

In that case, make the server notify the client to dynamically disable
audio forwarding so that it does not wait indefinitely for an audio
stream.

Also disable audio on unknown codec or missing decoder on the client
side, for the same reasons.
2023-02-26 22:41:07 +01:00
Romain Vimont
230abb30c2 Add record audio support
Make the recorder accept two input sources (video and audio), and mux
them into a single file.
2023-02-26 22:41:07 +01:00
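A rough sketch of what muxing two input sources into one file looks like with libavformat. This is not the scrcpy recorder: codec parameters, error handling and the per-stream packet queues are elided, and the microsecond source time base is an assumption.

```c
// Mux a video and an audio stream into a single output file.
#include <libavformat/avformat.h>

struct mux {
    AVFormatContext *ctx;
    AVStream *video;
    AVStream *audio;
};

static int mux_open(struct mux *m, const char *filename) {
    avformat_alloc_output_context2(&m->ctx, NULL, NULL, filename);
    m->video = avformat_new_stream(m->ctx, NULL);
    m->audio = avformat_new_stream(m->ctx, NULL);
    // ...fill m->video->codecpar and m->audio->codecpar from the inputs...
    avio_open(&m->ctx->pb, filename, AVIO_FLAG_WRITE);
    return avformat_write_header(m->ctx, NULL); // needs both codecpars set
}

// Called for each packet from either source; is_video selects the stream.
static int mux_write(struct mux *m, AVPacket *pkt, int is_video) {
    AVStream *st = is_video ? m->video : m->audio;
    pkt->stream_index = st->index;
    // assumed: input timestamps are in microseconds
    av_packet_rescale_ts(pkt, (AVRational) {1, 1000000}, st->time_base);
    return av_interleaved_write_frame(m->ctx, pkt);
}

static void mux_close(struct mux *m) {
    av_write_trailer(m->ctx);
    avio_closep(&m->ctx->pb);
    avformat_free_context(m->ctx);
}
```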
Romain Vimont
cb0f4799a2 Rename video-specific variables in recorder
This paves the way to add audio-specific variables.
2023-02-26 22:41:07 +01:00
Romain Vimont
5fc38264c0 Do not merge config audio packets
For video streams (at least H.264 and H.265), the config packet
containing SPS/PPS must be prepended to the next packet (the following
keyframe).

For audio streams (at least OPUS), they must not be merged.
2023-02-26 22:41:07 +01:00
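A hypothetical sketch of that merge rule using FFmpeg packets (the helper name and control flow are illustrative, not scrcpy's): a video config packet (pts == AV_NOPTS_VALUE) is buffered and prepended to the next packet, while audio packets pass through untouched. Error checks are omitted.

```c
// Prepend a buffered video config packet (SPS/PPS) to the next packet.
#include <libavcodec/avcodec.h>
#include <string.h>

// Returns the packet to forward to the sinks, or NULL when 'pkt' was a
// video config packet that is now buffered in *pending.
static AVPacket *merge_config_if_needed(AVPacket **pending, AVPacket *pkt,
                                        int is_video) {
    int is_config = pkt->pts == AV_NOPTS_VALUE;
    if (is_video && is_config) {
        av_packet_free(pending);        // drop any stale config packet
        *pending = av_packet_clone(pkt);
        return NULL;                    // wait for the following keyframe
    }
    if (is_video && *pending) {
        AVPacket *out = av_packet_alloc();
        av_new_packet(out, (*pending)->size + pkt->size);
        memcpy(out->data, (*pending)->data, (*pending)->size);
        memcpy(out->data + (*pending)->size, pkt->data, pkt->size);
        out->pts = pkt->pts;
        out->dts = pkt->dts;
        out->flags = pkt->flags;        // keeps AV_PKT_FLAG_KEY
        av_packet_free(pending);
        return out;                     // SPS/PPS + keyframe in one packet
    }
    return pkt; // audio (e.g. OPUS) config packets are never merged
}
```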
Romain Vimont
3feae6d41b Add an audio demuxer
Add a demuxer which will read the stream from the audio socket.
2023-02-26 22:41:07 +01:00
Romain Vimont
8cf821471f Give a name to demuxer instances
This will be useful in logs.
2023-02-26 22:41:07 +01:00
Romain Vimont
bd51b342b4 Rename demuxer to video_demuxer
There will be another demuxer instance for audio.
2023-02-26 22:41:07 +01:00
Romain Vimont
fa85a128da Extract OPUS extradata
For OPUS codec, FFmpeg expects the raw extradata, but MediaCodec wraps
it in some structure.

Fix the config packet to send only the raw extradata.
2023-02-26 22:41:07 +01:00
Romain Vimont
cd099f7a2b Use a streamer to send the audio stream
Send each encoded audio packet using a streamer.
2023-02-26 22:41:07 +01:00
Romain Vimont
20042addd4 Encode recorded audio on the device
For now, the encoded packets are just logged into the console.
2023-02-26 22:41:07 +01:00
Simon Chan
4d39bb9d26 Capture device audio
Create an AudioRecorder to capture the audio source REMOTE_SUBMIX.

For now, the captured packets are just logged into the console.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-26 22:41:07 +01:00
Simon Chan
50d180abf2 Add a new socket for audio stream
When audio is enabled, open a new socket to send the audio stream from
the device to the client.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-26 22:41:07 +01:00
Simon Chan
634ba5d7c4 Add --no-audio option
Audio will be enabled by default (when supported). Add an option to
disable it.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-26 22:41:07 +01:00
Romain Vimont
cadf95cfc4 Use FakeContext for Application instance
This will expose the correct package name and UID to the application
context.
2023-02-26 22:41:07 +01:00
Romain Vimont
13c8209071 Use shell package name for workarounds
For consistency.
2023-02-26 22:41:07 +01:00
Romain Vimont
c21f604f30 Use ROOT_UID from FakeContext
Remove USER_ID from ServiceManager, and replace it with a constant in
FakeContext.

This is the same as android.os.Process.ROOT_UID, but that constant was
only introduced in API 29.
2023-02-26 22:41:07 +01:00
Romain Vimont
db0bf6f34b Use PACKAGE_NAME from FakeContext
Remove duplicated constant.
2023-02-26 22:41:06 +01:00
Romain Vimont
09388f5352 Use AttributionSource from FakeContext
FakeContext already provides an AttributionSource instance.

Co-authored-by: Simon Chan <1330321+yume-chan@users.noreply.github.com>
2023-02-26 22:41:06 +01:00
Simon Chan
2f3092e6b4 Add a fake Android Context
Since scrcpy-server is not an Android application (it's a Java
executable), it has no Context.

Some features will require a Context instance to get the package name
and the UID. Add a FakeContext for this purpose.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-26 22:41:06 +01:00
Romain Vimont
0ee541fe26 Improve error message for unknown encoder
The provided encoder name depends on the selected codec. Improve the
error message and the suggestions.
2023-02-26 22:41:06 +01:00
Romain Vimont
58249715ac Rename "codec" variable to "mediaCodec"
This will allow using "codec" for the Codec type.
2023-02-26 22:41:06 +01:00
Romain Vimont
bbf9eeadf1 Make streamer independent of codec type
Rename VideoStreamer to Streamer, and extract a Codec interface which
will also support audio codecs.
2023-02-26 22:41:06 +01:00
Romain Vimont
3b8bb5feb5 Pass all args to ScreenEncoder constructor
There is no good reason to pass some of them in the constructor and
others as parameters of the streamScreen() method.
2023-02-26 22:41:06 +01:00
Romain Vimont
fa31aaaba8 Move screen encoder initialization
This prepares further refactors.
2023-02-26 22:41:06 +01:00
Romain Vimont
5b6ae2fef3 Write streamer header from ScreenEncoder
The screen encoder is responsible for writing data to the video streamer.
2023-02-26 22:41:06 +01:00
Romain Vimont
bf1a4ae266 Use VideoStreamer directly from ScreenEncoder
The Callbacks interface notifies of new packets. But in addition, the
screen encoder will need to write headers on start.

We could add a function onStart(), but for simplicity, just remove the
interface, which brings no value, and call the streamer directly.

Refs 87972e2022
2023-02-26 22:41:06 +01:00
Romain Vimont
a29da81f1a Simplify error handling on socket creation
On any error, all previously opened sockets must be closed.

Handle these errors in a single catch-block. Currently, there are only 2
sockets, but this will simplify things even more as more sockets are
added.

Note: this commit is better displayed with --ignore-space-change (-b).
2023-02-26 22:41:06 +01:00
Romain Vimont
145ba93bd3 Reorder initialization
Initialize components in the pipeline order: demuxer first, decoder and
recorder second.
2023-02-26 22:41:06 +01:00
Romain Vimont
57356a3a09 Refactor recorder logic
Process the initial config packet (necessary to write the header)
separately.
2023-02-26 22:41:06 +01:00
Romain Vimont
4b1f27bdee Move last packet recording
Write the last packet at the end.
2023-02-26 22:41:06 +01:00
Romain Vimont
e45c499358 Add start() function for recorder
For consistency with the other components, do not start the internal
thread from an init() function.
2023-02-26 22:41:06 +01:00
Romain Vimont
3f99f59394 Open recording file from the recorder thread
The recorder opened the target file from the packet sink open()
callback, called by the demuxer. Only then was the recorder thread
started.

One golden rule for the recorder is to never block the demuxer for I/O,
because it would impact mirroring. This rule is respected when recording
packets, but not for the initial recorder opening.

Therefore, start the recorder thread from sc_recorder_init(), open the
file immediately from the recorder thread, then make it wait for the
stream to start (on packet sink open()).

Now that the recorder can report errors directly (rather than making the
demuxer call fail), it is possible to report a file opening error even
before the packet sink is open.
2023-02-26 22:41:06 +01:00
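A simplified sketch of this threading pattern, assuming plain pthreads instead of scrcpy's sc_thread/sc_mutex wrappers (names and the error-reporting detail are illustrative): the recorder thread opens the file immediately, then waits on a condition variable, while the packet sink open() callback only flips a flag and signals.

```c
// Recorder thread opens the file itself; the demuxer never blocks on I/O.
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct recorder {
    pthread_mutex_t mutex;
    pthread_cond_t cond;
    bool stream_started;
    bool stopped;
    const char *filename;
    FILE *file;
};

static void *recorder_run(void *data) {
    struct recorder *rec = data;

    // Open the target file from the recorder thread, right after init()
    rec->file = fopen(rec->filename, "wb");
    if (!rec->file) {
        // report the error directly (e.g. push a "recorder stopped" event)
        return NULL;
    }

    pthread_mutex_lock(&rec->mutex);
    while (!rec->stream_started && !rec->stopped) {
        pthread_cond_wait(&rec->cond, &rec->mutex);
    }
    pthread_mutex_unlock(&rec->mutex);

    // ...write the header, then consume queued packets until stopped...
    return NULL;
}

// Packet sink open() callback, called by the demuxer: flip a flag and
// signal the recorder thread, nothing else.
static void recorder_on_stream_started(struct recorder *rec) {
    pthread_mutex_lock(&rec->mutex);
    rec->stream_started = true;
    pthread_cond_signal(&rec->cond);
    pthread_mutex_unlock(&rec->mutex);
}
```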
Romain Vimont
c976698e40 Inline packet_sink impl in recorder
Remove useless wrappers.
2023-02-26 22:41:06 +01:00
Romain Vimont
3bf4712cef Initialize recorder fields from init()
The recorder has two initialization phases: one to initialize the
concrete recorder object, and one to open its packet_sink trait.

Initialize mutex and condvar as part of the object initialization.

If there were several packet_sink traits, the mutex and condvar would
still be initialized only once.
2023-02-26 22:41:06 +01:00
Romain Vimont
31ffb6d33d Report recorder errors
Stop scrcpy on recorder errors.

It was previously stopped indirectly by the demuxer, which failed to
push packets to a recorder in error. Report it directly instead:
 - it avoids waiting for the next demuxer call;
 - it will allow opening the target file from a separate thread and
   stopping immediately on any I/O error.
2023-02-26 22:41:06 +01:00
Romain Vimont
9acd03b5c3 Move previous packet to a local variable
It is only used from run_recorder().
2023-02-26 22:41:06 +01:00
Romain Vimont
cdc4b47ea2 Move pts_origin to a local variable
It is only used from run_recorder().
2023-02-26 22:41:06 +01:00
Romain Vimont
f0fce4125e Change PTS origin type from uint64_t to int64_t
It is initialized from AVPacket.pts, which is an int64_t.
2023-02-26 22:41:06 +01:00
Romain Vimont
6630c6dbb4 Fix --encoder documentation
Mention that it depends on the codec provided by --codec (which is not
necessarily H264 anymore).
2023-02-26 22:41:06 +01:00
Romain Vimont
82c7752cc4 Do not print stacktraces when unnecessary
User-friendly error messages are printed on specific configuration
exceptions. In that case, do not print the stacktrace.

Also handle the user-friendly error message directly where the error
occurs, and print multiline messages in a single log call, to avoid
confusing interleaving.
2023-02-26 22:41:06 +01:00
Romain Vimont
d2dd3bf434 Fix --no-clipboard-autosync bash completion
Fix typo.
2023-02-26 22:41:06 +01:00
Romain Vimont
061dae3790 Split server stop() and join()
For consistency with the other components, call stop() and join()
separately.

This allows stopping all components, then joining them all.
2023-02-26 22:41:06 +01:00
Romain Vimont
52eeb197b3 Print FFmpeg logs
FFmpeg logs are redirected to a specific SDL log category.

Initialize the log level for this category to print them as expected.
2023-02-26 22:41:06 +01:00
Romain Vimont
693570fef1 Move FFmpeg callback initialization
Configure FFmpeg log redirection on start from a log helper.
2023-02-26 22:41:06 +01:00
Romain Vimont
346145f4bd Silence lint warning about constant in API 29
MediaFormat.MIMETYPE_VIDEO_AV1 has been added in API 29, but it is not
a problem to inline the constant in older versions.
2023-02-26 22:41:06 +01:00
Romain Vimont
d6aff0e5d7 Remove manifest package name
As reported by gradle:

> Setting the namespace via a source AndroidManifest.xml's package
> attribute is deprecated.
>
> Please instead set the namespace (or testNamespace) in the module's
> build.gradle file, as described here:
> https://developer.android.com/studio/build/configure-app-module#set-namespace
2023-02-26 22:41:06 +01:00
Romain Vimont
79d127b5f1 Upgrade gradle build tools to 7.4.0
Plugin version 7.4.0.
Gradle version 7.5.

Refs <https://developer.android.com/studio/releases/gradle-plugin#updating-gradle>
2023-02-26 22:11:37 +01:00
128 changed files with 2128 additions and 2513 deletions

View File

@@ -15,7 +15,7 @@ First, you need to install the required packages:
sudo apt install ffmpeg libsdl2-2.0-0 adb wget \
gcc git pkg-config meson ninja-build libsdl2-dev \
libavcodec-dev libavdevice-dev libavformat-dev libavutil-dev \
libswresample-dev libusb-1.0-0 libusb-1.0-0-dev
libusb-1.0-0 libusb-1.0-0-dev
```
Then clone the repo and execute the installation script
@@ -94,7 +94,7 @@ sudo apt install ffmpeg libsdl2-2.0-0 adb libusb-1.0-0
# client build dependencies
sudo apt install gcc git pkg-config meson ninja-build libsdl2-dev \
libavcodec-dev libavdevice-dev libavformat-dev libavutil-dev \
libswresample-dev libusb-1.0-0-dev
libusb-1.0-0-dev
# server build dependencies
sudo apt install openjdk-11-jdk

View File

@@ -718,7 +718,7 @@ scrcpy --display=1
The list of display ids can be retrieved by:
```bash
scrcpy --list-displays
adb shell dumpsys display # search "mDisplayId=" in the output
```
The secondary display may only be controlled if the device runs at least Android

View File

@@ -45,7 +45,6 @@ _scrcpy() {
-r --record=
--record-format=
--render-driver=
--require-audio
--rotation=
-s --serial=
--shortcut-mod=
@@ -78,7 +77,7 @@ _scrcpy() {
return
;;
--audio-codec)
COMPREPLY=($(compgen -W 'opus aac raw' -- "$cur"))
COMPREPLY=($(compgen -W 'opus aac' -- "$cur"))
return
;;
--lock-video-orientation)

View File

@@ -10,7 +10,7 @@ local arguments
arguments=(
'--always-on-top[Make scrcpy window always on top \(above other windows\)]'
'--audio-bit-rate=[Encode the audio at the given bit-rate]'
'--audio-codec=[Select the audio codec]:codec:(opus aac raw)'
'--audio-codec=[Select the audio codec]:codec:(opus aac)'
'--audio-codec-options=[Set a list of comma-separated key\:type=value options for the device audio encoder]'
'--audio-encoder=[Use a specific MediaCodec audio encoder]'
{-b,--video-bit-rate=}'[Encode the video at the given bit-rate]'
@@ -51,7 +51,6 @@ arguments=(
{-r,--record=}'[Record screen to file]:record file:_files'
'--record-format=[Force recording format]:format:(mp4 mkv)'
'--render-driver=[Request SDL to use the given render driver]:driver name:(direct3d opengl opengles2 opengles metal software)'
'--require-audio=[Make scrcpy fail if audio is enabled but does not work]'
'--rotation=[Set the initial display rotation]:rotation values:(0 1 2 3)'
{-s,--serial=}'[The device serial number \(mandatory for multiple devices only\)]:serial:($("${ADB-adb}" devices | awk '\''$2 == "device" {print $1}'\''))'
'--shortcut-mod=[\[key1,key2+key3,...\] Specify the modifiers to use for scrcpy shortcuts]:shortcut mod:(lctrl rctrl lalt ralt lsuper rsuper)'

View File

@@ -11,7 +11,6 @@ src = [
'src/control_msg.c',
'src/controller.c',
'src/decoder.c',
'src/delay_buffer.c',
'src/demuxer.c',
'src/device_msg.c',
'src/icon.c',
@@ -30,8 +29,7 @@ src = [
'src/screen.c',
'src/server.c',
'src/version.c',
'src/trait/frame_source.c',
'src/trait/packet_source.c',
'src/video_buffer.c',
'src/util/acksync.c',
'src/util/average.c',
'src/util/bytebuf.c',
@@ -39,7 +37,6 @@ src = [
'src/util/intmap.c',
'src/util/intr.c',
'src/util/log.c',
'src/util/memory.c',
'src/util/net.c',
'src/util/net_intr.c',
'src/util/process.c',
@@ -136,19 +133,24 @@ else
ffmpeg_bin_dir = meson.current_source_dir() + '/prebuilt-deps/data/' + prebuilt_ffmpeg + '/bin'
ffmpeg_include_dir = 'prebuilt-deps/data/' + prebuilt_ffmpeg + '/include'
# ffmpeg versions are different for win32 and win64 builds
ffmpeg_avcodec = meson.get_cross_property('ffmpeg_avcodec')
ffmpeg_avformat = meson.get_cross_property('ffmpeg_avformat')
ffmpeg_avutil = meson.get_cross_property('ffmpeg_avutil')
ffmpeg = declare_dependency(
dependencies: [
cc.find_library('avcodec-60', dirs: ffmpeg_bin_dir),
cc.find_library('avformat-60', dirs: ffmpeg_bin_dir),
cc.find_library('avutil-58', dirs: ffmpeg_bin_dir),
cc.find_library('swresample-4', dirs: ffmpeg_bin_dir),
cc.find_library(ffmpeg_avcodec, dirs: ffmpeg_bin_dir),
cc.find_library(ffmpeg_avformat, dirs: ffmpeg_bin_dir),
cc.find_library(ffmpeg_avutil, dirs: ffmpeg_bin_dir),
],
include_directories: include_directories(ffmpeg_include_dir)
)
prebuilt_libusb = meson.get_cross_property('prebuilt_libusb')
libusb_bin_dir = meson.current_source_dir() + '/prebuilt-deps/data/' + prebuilt_libusb + '/bin'
libusb_include_dir = 'prebuilt-deps/data/' + prebuilt_libusb + '/include'
prebuilt_libusb_root = meson.get_cross_property('prebuilt_libusb_root')
libusb_bin_dir = meson.current_source_dir() + '/prebuilt-deps/data/' + prebuilt_libusb
libusb_include_dir = 'prebuilt-deps/data/' + prebuilt_libusb_root + '/include'
libusb = declare_dependency(
dependencies: [
@@ -176,7 +178,6 @@ check_functions = [
'vasprintf',
'nrand48',
'jrand48',
'reallocarray',
]
foreach f : check_functions
@@ -267,6 +268,9 @@ if get_option('buildtype') == 'debug'
'tests/test_bytebuf.c',
'src/util/bytebuf.c',
]],
['test_cbuf', [
'tests/test_cbuf.c',
]],
['test_cli', [
'tests/test_cli.c',
'src/cli.c',
@@ -291,6 +295,9 @@ if get_option('buildtype') == 'debug'
'tests/test_device_msg_deserialize.c',
'src/device_msg.c',
]],
['test_queue', [
'tests/test_queue.c',
]],
['test_strbuf', [
'tests/test_strbuf.c',
'src/util/strbuf.c',
@@ -300,10 +307,6 @@ if get_option('buildtype') == 'debug'
'src/util/str.c',
'src/util/strbuf.c',
]],
['test_vecdeque', [
'tests/test_vecdeque.c',
'src/util/memory.c',
]],
['test_vector', [
'tests/test_vector.c',
]],

View File

@@ -0,0 +1,45 @@
#!/usr/bin/env bash
set -e
DIR=$(dirname ${BASH_SOURCE[0]})
cd "$DIR"
. common
mkdir -p "$PREBUILT_DATA_DIR"
cd "$PREBUILT_DATA_DIR"
DEP_DIR=ffmpeg-win32-4.3.1
FILENAME_SHARED=ffmpeg-4.3.1-win32-shared.zip
SHA256SUM_SHARED=357af9901a456f4dcbacd107e83a934d344c9cb07ddad8aaf80612eeab7d26d2
FILENAME_DEV=ffmpeg-4.3.1-win32-dev.zip
SHA256SUM_DEV=230efb08e9bcf225bd474da29676c70e591fc94d8790a740ca801408fddcb78b
if [[ -d "$DEP_DIR" ]]
then
echo "$DEP_DIR" found
exit 0
fi
get_file "https://github.com/Genymobile/scrcpy/releases/download/v1.16/$FILENAME_SHARED" \
"$FILENAME_SHARED" "$SHA256SUM_SHARED"
get_file "https://github.com/Genymobile/scrcpy/releases/download/v1.16/$FILENAME_DEV" \
"$FILENAME_DEV" "$SHA256SUM_DEV"
mkdir "$DEP_DIR"
cd "$DEP_DIR"
ZIP_PREFIX_SHARED=ffmpeg-4.3.1-win32-shared
unzip "../$FILENAME_SHARED" \
"$ZIP_PREFIX_SHARED"/bin/avutil-56.dll \
"$ZIP_PREFIX_SHARED"/bin/avcodec-58.dll \
"$ZIP_PREFIX_SHARED"/bin/avformat-58.dll \
"$ZIP_PREFIX_SHARED"/bin/swresample-3.dll \
"$ZIP_PREFIX_SHARED"/bin/swscale-5.dll
ZIP_PREFIX_DEV=ffmpeg-4.3.1-win32-dev
unzip "../$FILENAME_DEV" \
"$ZIP_PREFIX_DEV/include/*"
mv "$ZIP_PREFIX_SHARED"/* .
mv "$ZIP_PREFIX_DEV"/* .
rmdir "$ZIP_PREFIX_SHARED" "$ZIP_PREFIX_DEV"

View File

@@ -0,0 +1,36 @@
#!/usr/bin/env bash
set -e
DIR=$(dirname ${BASH_SOURCE[0]})
cd "$DIR"
. common
mkdir -p "$PREBUILT_DATA_DIR"
cd "$PREBUILT_DATA_DIR"
VERSION=5.1.2
DEP_DIR=ffmpeg-win64-$VERSION
FILENAME=ffmpeg-$VERSION-full_build-shared.7z
SHA256SUM=d9eb97b72d7cfdae4d0f7eaea59ccffb8c364d67d88018ea715d5e2e193f00e9
if [[ -d "$DEP_DIR" ]]
then
echo "$DEP_DIR" found
exit 0
fi
get_file "https://github.com/GyanD/codexffmpeg/releases/download/$VERSION/$FILENAME" \
"$FILENAME" "$SHA256SUM"
mkdir "$DEP_DIR"
cd "$DEP_DIR"
ZIP_PREFIX=ffmpeg-$VERSION-full_build-shared
7z x "../$FILENAME" \
"$ZIP_PREFIX"/bin/avutil-57.dll \
"$ZIP_PREFIX"/bin/avcodec-59.dll \
"$ZIP_PREFIX"/bin/avformat-59.dll \
"$ZIP_PREFIX"/bin/swresample-4.dll \
"$ZIP_PREFIX"/bin/swscale-6.dll \
"$ZIP_PREFIX"/include
mv "$ZIP_PREFIX"/* .
rmdir "$ZIP_PREFIX"

View File

@@ -1,30 +0,0 @@
#!/usr/bin/env bash
set -e
DIR=$(dirname ${BASH_SOURCE[0]})
cd "$DIR"
. common
mkdir -p "$PREBUILT_DATA_DIR"
cd "$PREBUILT_DATA_DIR"
VERSION=6.0-scrcpy-2
DEP_DIR="ffmpeg-$VERSION"
FILENAME="$DEP_DIR".7z
SHA256SUM=98ef97f8607c97a5c4f9c5a0a991b78f105d002a3619145011d16ffb92501b14
if [[ -d "$DEP_DIR" ]]
then
echo "$DEP_DIR" found
exit 0
fi
get_file "https://github.com/rom1v/scrcpy-deps/releases/download/$VERSION/$FILENAME" \
"$FILENAME" "$SHA256SUM"
mkdir "$DEP_DIR"
cd "$DEP_DIR"
ZIP_PREFIX=ffmpeg
7z x "../$FILENAME"
mv "$ZIP_PREFIX"/* .
rmdir "$ZIP_PREFIX"

View File

@@ -22,12 +22,13 @@ get_file "https://github.com/libusb/libusb/releases/download/v1.0.26/$FILENAME"
mkdir "$DEP_DIR"
cd "$DEP_DIR"
# include/ is the same in all folders of the archive
7z x "../$FILENAME" \
libusb-1.0.26-binaries/libusb-MinGW-Win32/bin/msys-usb-1.0.dll \
libusb-1.0.26-binaries/libusb-MinGW-Win32/include/ \
libusb-1.0.26-binaries/libusb-MinGW-x64/bin/msys-usb-1.0.dll \
libusb-1.0.26-binaries/libusb-MinGW-x64/include/
mv libusb-1.0.26-binaries/libusb-MinGW-Win32 .
mv libusb-1.0.26-binaries/libusb-MinGW-x64 .
mv libusb-1.0.26-binaries/libusb-MinGW-Win32/bin MinGW-Win32
mv libusb-1.0.26-binaries/libusb-MinGW-x64/bin MinGW-x64
mv libusb-1.0.26-binaries/libusb-MinGW-x64/include .
rm -rf libusb-1.0.26-binaries

View File

@@ -23,32 +23,14 @@ Make scrcpy window always on top (above other windows).
.BI "\-\-audio\-bit\-rate " value
Encode the audio at the given bit\-rate, expressed in bits/s. Unit suffixes are supported: '\fBK\fR' (x1000) and '\fBM\fR' (x1000000).
Default is 128K (128000).
.TP
.BI "\-\-audio\-buffer ms
Configure the audio buffering delay (in milliseconds).
Lower values decrease the latency, but increase the likelihood of buffer underrun (causing audio glitches).
Default is 50.
Default is 196K (196000).
.TP
.BI "\-\-audio\-codec " name
Select an audio codec (opus, aac or raw).
Select an audio codec (opus or aac).
Default is opus.
.TP
.BI "\-\-audio\-codec\-options " key\fR[:\fItype\fR]=\fIvalue\fR[,...]
Set a list of comma-separated key:type=value options for the device audio encoder.
The possible values for 'type' are 'int' (default), 'long', 'float' and 'string'.
The list of possible codec options is available in the Android documentation
.UR https://d.android.com/reference/android/media/MediaFormat
.UE .
.TP
.BI "\-\-audio\-encoder " name
Use a specific MediaCodec audio encoder (depending on the codec provided by \fB\-\-audio\-codec\fR).
@@ -280,10 +262,6 @@ Supported names are currently "direct3d", "opengl", "opengles2", "opengles", "me
.UR https://wiki.libsdl.org/SDL_HINT_RENDER_DRIVER
.UE
.TP
.B \-\-require\-audio
By default, scrcpy mirrors only the video if audio capture fails on the device. This option makes scrcpy fail if audio is enabled but does not work.
.TP
.BI "\-\-rotation " value
Set the initial display rotation. Possibles values are 0, 1, 2 and 3. Each increment adds a 90 degrees rotation counterclockwise.

View File

@@ -12,20 +12,21 @@
#define SC_AV_SAMPLE_FMT AV_SAMPLE_FMT_FLT
#define SC_SDL_SAMPLE_FMT AUDIO_F32
#define SC_AUDIO_OUTPUT_BUFFER_SAMPLES 240 // 5ms at 48000Hz
#define SC_AUDIO_OUTPUT_BUFFER_SAMPLES 480 // 10ms at 48000Hz
static inline uint32_t
bytes_to_samples(struct sc_audio_player *ap, size_t bytes) {
assert(bytes % (ap->nb_channels * ap->out_bytes_per_sample) == 0);
return bytes / (ap->nb_channels * ap->out_bytes_per_sample);
}
// The target number of buffered samples between the producer and the consumer.
// This value is directly used for compensation.
#define SC_TARGET_BUFFERED_SAMPLES (3 * SC_AUDIO_OUTPUT_BUFFER_SAMPLES)
static inline size_t
samples_to_bytes(struct sc_audio_player *ap, uint32_t samples) {
return samples * ap->nb_channels * ap->out_bytes_per_sample;
}
// If the consumer is too late, skip samples to keep at most this value
#define SC_BUFFERED_SAMPLES_THRESHOLD 2400 // 50ms at 48000Hz
static void SDLCALL
// Use a ring-buffer of 1 second (at 48000Hz) between the producer and the
// consumer. It is too big, but it guarantees that the producer and the consumer
// will be able to access it in parallel without locking.
#define SC_BYTEBUF_SIZE_IN_SAMPLES 48000
void
sc_audio_player_sdl_callback(void *userdata, uint8_t *stream, int len_int) {
struct sc_audio_player *ap = userdata;
@@ -36,58 +37,57 @@ sc_audio_player_sdl_callback(void *userdata, uint8_t *stream, int len_int) {
size_t len = len_int;
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] SDL callback requests %" PRIu32 " samples",
bytes_to_samples(ap, len));
LOGD("[Audio] SDL callback requests %" SC_PRIsizet " samples",
len / (ap->nb_channels * ap->out_bytes_per_sample));
#endif
size_t read_avail = sc_bytebuf_read_available(&ap->buf);
if (!ap->played) {
uint32_t buffered_samples = bytes_to_samples(ap, read_avail);
// Part of the buffering is handled by inserting initial silence. The
// remaining (margin) last samples will be handled by compensation.
uint32_t margin = 30 * ap->sample_rate / 1000; // 30ms
if (buffered_samples + margin < ap->target_buffering) {
LOGV("[Audio] Inserting initial buffering silence: %" PRIu32
" samples", bytes_to_samples(ap, len));
// Delay playback starting to reach the target buffering. Fill the
// whole buffer with silence (len is small compared to the
// arbitrary margin value).
memset(stream, 0, len);
return;
}
size_t read = sc_bytebuf_read_remaining(&ap->buf);
size_t max_buffered_bytes = SC_BUFFERED_SAMPLES_THRESHOLD
* ap->nb_channels * ap->out_bytes_per_sample;
if (read > max_buffered_bytes + len) {
size_t skip = read - (max_buffered_bytes + len);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] Buffered samples threshold exceeded: %" SC_PRIsizet
" bytes, skipping %" SC_PRIsizet " bytes", read, skip);
#endif
// After this callback, exactly max_buffered_bytes will remain
sc_bytebuf_skip(&ap->buf, skip);
read = max_buffered_bytes + len;
}
size_t read = MIN(read_avail, len);
// Number of buffered samples (may be negative on underflow)
float buffered_samples = ((float) read - len_int)
/ (ap->nb_channels * ap->out_bytes_per_sample);
sc_average_push(&ap->avg_buffered_samples, buffered_samples);
if (read) {
if (read > len) {
read = len;
}
sc_bytebuf_read(&ap->buf, stream, read);
}
if (read < len) {
size_t silence_bytes = len - read;
uint32_t silence_samples = bytes_to_samples(ap, silence_bytes);
// Insert silence. In theory, the inserted silent samples replace the
// missing real samples, which will arrive later, so they should be
// dropped to keep the latency minimal. However, this would cause very
// audible glitches, so let the clock compensation restore the target
// latency.
LOGD("[Audio] Buffer underflow, inserting silence: %" PRIu32 " samples",
silence_samples);
memset(stream + read, 0, silence_bytes);
if (ap->received) {
// Inserting additional samples immediately increases buffering
ap->avg_buffering.avg += silence_samples;
}
// Insert silence
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] Buffer underflow, inserting silence: %" SC_PRIsizet
" bytes", len - read);
#endif
memset(stream + read, 0, len - read);
}
}
ap->played = true;
static size_t
sc_audio_player_get_buf_size(struct sc_audio_player *ap, size_t samples) {
assert(ap->nb_channels);
assert(ap->out_bytes_per_sample);
return samples * ap->nb_channels * ap->out_bytes_per_sample;
}
static uint8_t *
sc_audio_player_get_swr_buf(struct sc_audio_player *ap, uint32_t min_samples) {
size_t min_buf_size = samples_to_bytes(ap, min_samples);
if (min_buf_size > ap->swr_buf_alloc_size) {
sc_audio_player_get_swr_buf(struct sc_audio_player *ap, size_t min_samples) {
size_t min_buf_size = sc_audio_player_get_buf_size(ap, min_samples);
if (min_buf_size < ap->swr_buf_alloc_size) {
size_t new_size = min_buf_size + 4096;
uint8_t *buf = realloc(ap->swr_buf, new_size);
if (!buf) {
@@ -102,203 +102,15 @@ sc_audio_player_get_swr_buf(struct sc_audio_player *ap, uint32_t min_samples) {
return ap->swr_buf;
}
static bool
sc_audio_player_frame_sink_push(struct sc_frame_sink *sink,
const AVFrame *frame) {
struct sc_audio_player *ap = DOWNCAST(sink);
SwrContext *swr_ctx = ap->swr_ctx;
int64_t swr_delay = swr_get_delay(swr_ctx, ap->sample_rate);
// No need to av_rescale_rnd(), input and output sample rates are the same.
// Add more space (256) for clock compensation.
int dst_nb_samples = swr_delay + frame->nb_samples + 256;
uint8_t *swr_buf = sc_audio_player_get_swr_buf(ap, dst_nb_samples);
if (!swr_buf) {
return false;
}
int ret = swr_convert(swr_ctx, &swr_buf, dst_nb_samples,
(const uint8_t **) frame->data, frame->nb_samples);
if (ret < 0) {
LOGE("Resampling failed: %d", ret);
return false;
}
// swr_convert() returns the number of samples which would have been
// written if the buffer was big enough.
uint32_t samples_written = MIN(ret, dst_nb_samples);
size_t swr_buf_size = samples_to_bytes(ap, samples_written);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] %" PRIu32 " samples written to buffer", samples_written);
#endif
// Since this function is the only writer, the current available space is
// at least the previous available space. In practice, it should almost
// always be possible to write without lock.
bool lockless_write = swr_buf_size <= ap->previous_write_avail;
if (lockless_write) {
sc_bytebuf_prepare_write(&ap->buf, swr_buf, swr_buf_size);
}
SDL_LockAudioDevice(ap->device);
size_t read_avail = sc_bytebuf_read_available(&ap->buf);
uint32_t buffered_samples = bytes_to_samples(ap, read_avail);
if (lockless_write) {
sc_bytebuf_commit_write(&ap->buf, swr_buf_size);
} else {
// Take care to keep full samples
size_t align = ap->nb_channels * ap->out_bytes_per_sample;
size_t write_avail =
sc_bytebuf_write_available(&ap->buf) / align * align;
if (swr_buf_size > write_avail) {
// Entering this branch is very unlikely, the ring-buffer (bytebuf)
// is allocated with a size sufficient to store 1 second more than
// the target buffering. If this happens, though, we have to skip
// old samples.
size_t cap = sc_bytebuf_capacity(&ap->buf) / align * align;
if (swr_buf_size > cap) {
// Very very unlikely: a single resampled frame should never
// exceed the ring-buffer size (or something is very wrong).
// Ignore the first bytes in swr_buf
swr_buf += swr_buf_size - cap;
swr_buf_size = cap;
// This change in samples_written will impact the
// instant_compensation below
samples_written -= bytes_to_samples(ap, swr_buf_size - cap);
}
assert(swr_buf_size >= write_avail);
if (swr_buf_size > write_avail) {
sc_bytebuf_skip(&ap->buf, swr_buf_size - write_avail);
uint32_t skip_samples =
bytes_to_samples(ap, swr_buf_size - write_avail);
assert(buffered_samples >= skip_samples);
buffered_samples -= skip_samples;
if (ap->played) {
// Dropping input samples instantly decreases buffering
ap->avg_buffering.avg -= skip_samples;
}
}
// It should remain exactly the expected size to write the new
// samples.
assert((sc_bytebuf_write_available(&ap->buf) / align * align)
== swr_buf_size);
}
sc_bytebuf_write(&ap->buf, swr_buf, swr_buf_size);
}
buffered_samples += samples_written;
assert(samples_to_bytes(ap, buffered_samples)
== sc_bytebuf_read_available(&ap->buf));
// Read with lock held, to be used after unlocking
bool played = ap->played;
if (played) {
uint32_t max_buffered_samples = ap->target_buffering
+ 12 * SC_AUDIO_OUTPUT_BUFFER_SAMPLES
+ ap->target_buffering / 10;
if (buffered_samples > max_buffered_samples) {
uint32_t skip_samples = buffered_samples - max_buffered_samples;
size_t skip_bytes = samples_to_bytes(ap, skip_samples);
sc_bytebuf_skip(&ap->buf, skip_bytes);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] Buffering threshold exceeded, skipping %" PRIu32
" samples", skip_samples);
#endif
}
// Number of samples added (or removed, if negative) for compensation
int32_t instant_compensation =
(int32_t) samples_written - frame->nb_samples;
// The compensation must apply instantly, it must not be smoothed
ap->avg_buffering.avg += instant_compensation;
// However, the buffering level must be smoothed
sc_average_push(&ap->avg_buffering, buffered_samples);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] buffered_samples=%" PRIu32 " avg_buffering=%f",
buffered_samples, sc_average_get(&ap->avg_buffering));
#endif
} else {
// SDL playback not started yet, do not accumulate more than
// max_initial_buffering samples, this would cause unnecessary delay
// (and glitches to compensate) on start.
uint32_t max_initial_buffering = ap->target_buffering
+ 2 * SC_AUDIO_OUTPUT_BUFFER_SAMPLES;
if (buffered_samples > max_initial_buffering) {
uint32_t skip_samples = buffered_samples - max_initial_buffering;
size_t skip_bytes = samples_to_bytes(ap, skip_samples);
sc_bytebuf_skip(&ap->buf, skip_bytes);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] Playback not started, skipping %" PRIu32 " samples",
skip_samples);
#endif
}
}
ap->previous_write_avail = sc_bytebuf_write_available(&ap->buf);
ap->received = true;
SDL_UnlockAudioDevice(ap->device);
if (played) {
ap->samples_since_resync += samples_written;
if (ap->samples_since_resync >= ap->sample_rate) {
// Recompute compensation every second
ap->samples_since_resync = 0;
float avg = sc_average_get(&ap->avg_buffering);
int diff = ap->target_buffering - avg;
if (diff < 0 && buffered_samples < ap->target_buffering) {
// Do not accelerate if the instant buffering level is below
// the average, this would increase underflow
diff = 0;
}
// Compensate the diff over 4 seconds (but will be recomputed after
// 1 second)
int distance = 4 * ap->sample_rate;
// Limit compensation rate to 2%
int abs_max_diff = distance / 50;
diff = CLAMP(diff, -abs_max_diff, abs_max_diff);
LOGV("[Audio] Buffering: target=%" PRIu32 " avg=%f cur=%" PRIu32
" compensation=%d", ap->target_buffering, avg,
buffered_samples, diff);
int ret = swr_set_compensation(swr_ctx, diff, distance);
if (ret < 0) {
LOGW("Resampling compensation failed: %d", ret);
// not fatal
}
}
}
return true;
}
static bool
sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
const AVCodecContext *ctx) {
struct sc_audio_player *ap = DOWNCAST(sink);
#ifdef SCRCPY_LAVU_HAS_CHLAYOUT
assert(ctx->ch_layout.nb_channels > 0);
unsigned nb_channels = ctx->ch_layout.nb_channels;
#else
int tmp = av_get_channel_layout_nb_channels(ctx->channel_layout);
assert(tmp > 0);
unsigned nb_channels = tmp;
#endif
SDL_AudioSpec desired = {
.freq = ctx->sample_rate,
.format = SC_SDL_SAMPLE_FMT,
.channels = nb_channels,
.channels = ctx->ch_layout.nb_channels,
.samples = SC_AUDIO_OUTPUT_BUFFER_SAMPLES,
.callback = sc_audio_player_sdl_callback,
.userdata = ap,
@@ -319,24 +131,17 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
ap->swr_ctx = swr_ctx;
assert(ctx->sample_rate > 0);
assert(ctx->ch_layout.nb_channels > 0);
assert(!av_sample_fmt_is_planar(SC_AV_SAMPLE_FMT));
int out_bytes_per_sample = av_get_bytes_per_sample(SC_AV_SAMPLE_FMT);
assert(out_bytes_per_sample > 0);
#ifdef SCRCPY_LAVU_HAS_CHLAYOUT
av_opt_set_chlayout(swr_ctx, "in_chlayout", &ctx->ch_layout, 0);
av_opt_set_chlayout(swr_ctx, "out_chlayout", &ctx->ch_layout, 0);
#else
av_opt_set_channel_layout(swr_ctx, "in_channel_layout",
ctx->channel_layout, 0);
av_opt_set_channel_layout(swr_ctx, "out_channel_layout",
ctx->channel_layout, 0);
#endif
av_opt_set_int(swr_ctx, "in_sample_rate", ctx->sample_rate, 0);
av_opt_set_int(swr_ctx, "out_sample_rate", ctx->sample_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", ctx->sample_fmt, 0);
av_opt_set_chlayout(swr_ctx, "out_chlayout", &ctx->ch_layout, 0);
av_opt_set_int(swr_ctx, "out_sample_rate", ctx->sample_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", SC_AV_SAMPLE_FMT, 0);
int ret = swr_init(swr_ctx);
@@ -346,25 +151,20 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
}
ap->sample_rate = ctx->sample_rate;
ap->nb_channels = nb_channels;
ap->nb_channels = ctx->ch_layout.nb_channels;
ap->out_bytes_per_sample = out_bytes_per_sample;
ap->target_buffering = ap->target_buffering_delay * ap->sample_rate
/ SC_TICK_FREQ;
// Use a ring-buffer of the target buffering size plus 1 second between the
// producer and the consumer. It's too big on purpose, to guarantee that
// the producer and the consumer will be able to access it in parallel
// without locking.
size_t bytebuf_samples = ap->target_buffering + ap->sample_rate;
size_t bytebuf_size = samples_to_bytes(ap, bytebuf_samples);
size_t bytebuf_size =
sc_audio_player_get_buf_size(ap, SC_BYTEBUF_SIZE_IN_SAMPLES);
bool ok = sc_bytebuf_init(&ap->buf, bytebuf_size);
if (!ok) {
goto error_free_swr_ctx;
}
size_t initial_swr_buf_size = samples_to_bytes(ap, 4096);
ap->safe_empty_buffer = sc_bytebuf_write_remaining(&ap->buf);
size_t initial_swr_buf_size = sc_audio_player_get_buf_size(ap, 4096);
ap->swr_buf = malloc(initial_swr_buf_size);
if (!ap->swr_buf) {
LOG_OOM();
@@ -372,24 +172,9 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
}
ap->swr_buf_alloc_size = initial_swr_buf_size;
ap->previous_write_avail = sc_bytebuf_write_available(&ap->buf);
// Samples are produced and consumed by blocks, so the buffering must be
// smoothed to get a relatively stable value.
sc_average_init(&ap->avg_buffering, 32);
sc_average_init(&ap->avg_buffered_samples, 32);
ap->samples_since_resync = 0;
ap->received = false;
ap->played = false;
// The thread calling open() is the thread calling push(), which fills the
// audio buffer consumed by the SDL audio thread.
ok = sc_thread_set_priority(SC_THREAD_PRIORITY_TIME_CRITICAL);
if (!ok) {
ok = sc_thread_set_priority(SC_THREAD_PRIORITY_HIGH);
(void) ok; // We don't care if it worked, at least we tried
}
SDL_PauseAudioDevice(ap->device, 0);
return true;
@@ -417,10 +202,82 @@ sc_audio_player_frame_sink_close(struct sc_frame_sink *sink) {
swr_free(&ap->swr_ctx);
}
void
sc_audio_player_init(struct sc_audio_player *ap, sc_tick target_buffering) {
ap->target_buffering_delay = target_buffering;
static bool
sc_audio_player_frame_sink_push(struct sc_frame_sink *sink, const AVFrame *frame) {
struct sc_audio_player *ap = DOWNCAST(sink);
SwrContext *swr_ctx = ap->swr_ctx;
int64_t delay = swr_get_delay(swr_ctx, ap->sample_rate);
// No need to av_rescale_rnd(), input and output sample rates are the same
int dst_nb_samples = delay + frame->nb_samples;
uint8_t *swr_buf = sc_audio_player_get_swr_buf(ap, frame->nb_samples);
if (!swr_buf) {
return false;
}
int ret = swr_convert(swr_ctx, &swr_buf, dst_nb_samples,
(const uint8_t **) frame->data, frame->nb_samples);
if (ret < 0) {
LOGE("Resampling failed: %d", ret);
return false;
}
size_t samples_written = ret;
size_t swr_buf_size = sc_audio_player_get_buf_size(ap, samples_written);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGI("[Audio] %" SC_PRIsizet " samples written to buffer", samples_written);
#endif
// It should almost always be possible to write without lock
bool can_write_without_lock = swr_buf_size <= ap->safe_empty_buffer;
if (can_write_without_lock) {
sc_bytebuf_prepare_write(&ap->buf, swr_buf, swr_buf_size);
}
SDL_LockAudioDevice(ap->device);
if (can_write_without_lock) {
sc_bytebuf_commit_write(&ap->buf, swr_buf_size);
} else {
sc_bytebuf_write(&ap->buf, swr_buf, swr_buf_size);
}
// The next time, it will remain at least the current empty space
ap->safe_empty_buffer = sc_bytebuf_write_remaining(&ap->buf);
// Read the value written by the SDL thread under lock
float avg;
bool has_avg = sc_average_get(&ap->avg_buffered_samples, &avg);
SDL_UnlockAudioDevice(ap->device);
if (has_avg) {
ap->samples_since_resync += samples_written;
if (ap->samples_since_resync >= ap->sample_rate) {
// Resync every second
ap->samples_since_resync = 0;
int diff = SC_TARGET_BUFFERED_SAMPLES - avg;
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGI("[Audio] Average buffered samples = %f, compensation %d",
avg, diff);
#endif
// Compensate the diff over 3 seconds (but will be recomputed after
// 1 second)
int ret = swr_set_compensation(swr_ctx, diff, 3 * ap->sample_rate);
if (ret < 0) {
LOGW("Resampling compensation failed: %d", ret);
// not fatal
}
}
}
return true;
}
void
sc_audio_player_init(struct sc_audio_player *ap) {
static const struct sc_frame_sink_ops ops = {
.open = sc_audio_player_frame_sink_open,
.close = sc_audio_player_frame_sink_close,

View File

@@ -8,7 +8,6 @@
#include <util/average.h>
#include <util/bytebuf.h>
#include <util/thread.h>
#include <util/tick.h>
#include <libavformat/avformat.h>
#include <libswresample/swresample.h>
@@ -19,50 +18,27 @@ struct sc_audio_player {
SDL_AudioDeviceID device;
// The target buffering between the producer and the consumer. This value
// is directly used for compensation.
// Since audio capture and/or encoding on the device typically produce
// blocks of 960 samples (20ms) or 1024 samples (~21.3ms), this target
// value should be higher.
sc_tick target_buffering_delay;
uint32_t target_buffering; // in samples
// Audio buffer to communicate between the receiver and the SDL audio
// callback (protected by SDL_AudioDeviceLock())
// protected by SDL_AudioDeviceLock()
struct sc_bytebuf buf;
// Number of bytes which could be written without locking
size_t safe_empty_buffer;
// The previous number of bytes available in the buffer (only used by the
// receiver thread)
size_t previous_write_avail;
// Resampler (only used from the receiver thread)
struct SwrContext *swr_ctx;
// The sample rate is the same for input and output
unsigned sample_rate;
// The number of channels is the same for input and output
unsigned nb_channels;
// The number of bytes per sample for a single channel
unsigned out_bytes_per_sample;
// Target buffer for resampling (only used by the receiver thread)
// Target buffer for resampling
uint8_t *swr_buf;
size_t swr_buf_alloc_size;
// Number of buffered samples (may be negative on underflow) (only used by
// the receiver thread)
struct sc_average avg_buffering;
// Count the number of samples to trigger a compensation update regularly
// (only used by the receiver thread)
uint32_t samples_since_resync;
// Set to true the first time a sample is received (protected by
// SDL_AudioDeviceLock())
bool received;
// Set to true the first time the SDL callback is called (protected by
// SDL_AudioDeviceLock())
bool played;
// Number of buffered samples (may be negative on underflow)
struct sc_average avg_buffered_samples;
unsigned samples_since_resync;
const struct sc_audio_player_callbacks *cbs;
void *cbs_userdata;
@@ -73,6 +49,6 @@ struct sc_audio_player_callbacks {
};
void
sc_audio_player_init(struct sc_audio_player *ap, sc_tick target_buffering);
sc_audio_player_init(struct sc_audio_player *ap);
#endif

View File

@@ -19,7 +19,6 @@
enum {
OPT_RENDER_EXPIRED_FRAMES = 1000,
OPT_BIT_RATE,
OPT_WINDOW_TITLE,
OPT_PUSH_TARGET,
OPT_ALWAYS_ON_TOP,
@@ -70,8 +69,6 @@ enum {
OPT_AUDIO_ENCODER,
OPT_LIST_ENCODERS,
OPT_LIST_DISPLAYS,
OPT_REQUIRE_AUDIO,
OPT_AUDIO_BUFFER,
};
struct sc_option {
@@ -119,22 +116,13 @@ static const struct sc_option options[] = {
.argdesc = "value",
.text = "Encode the audio at the given bit-rate, expressed in bits/s. "
"Unit suffixes are supported: 'K' (x1000) and 'M' (x1000000).\n"
"Default is 128K (128000).",
},
{
.longopt_id = OPT_AUDIO_BUFFER,
.longopt = "audio-buffer",
.argdesc = "ms",
.text = "Configure the audio buffering delay (in milliseconds).\n"
"Lower values decrease the latency, but increase the "
"likelyhood of buffer underrun (causing audio glitches).\n"
"Default is 50.",
"Default is 196K (196000).",
},
{
.longopt_id = OPT_AUDIO_CODEC,
.longopt = "audio-codec",
.argdesc = "name",
.text = "Select an audio codec (opus, aac or raw).\n"
.text = "Select an audio codec (opus or aac).\n"
"Default is opus.",
},
{
@@ -165,12 +153,6 @@ static const struct sc_option options[] = {
"Unit suffixes are supported: 'K' (x1000) and 'M' (x1000000).\n"
"Default is 8M (8000000).",
},
{
// deprecated
.longopt_id = OPT_BIT_RATE,
.longopt = "bit-rate",
.argdesc = "value",
},
{
// Not really deprecated (--codec has never been released), but without
// declaring an explicit --codec option, getopt_long() partial matching
@@ -476,13 +458,6 @@ static const struct sc_option options[] = {
.longopt_id = OPT_RENDER_EXPIRED_FRAMES,
.longopt = "render-expired-frames",
},
{
.longopt_id = OPT_REQUIRE_AUDIO,
.longopt = "require-audio",
.text = "By default, scrcpy mirrors only the video when audio capture "
"fails on the device. This option makes scrcpy fail if audio "
"is enabled but does not work."
},
{
.longopt_id = OPT_ROTATION,
.longopt = "rotation",
@@ -1522,11 +1497,7 @@ parse_audio_codec(const char *optarg, enum sc_codec *codec) {
*codec = SC_CODEC_AAC;
return true;
}
if (!strcmp(optarg, "raw")) {
*codec = SC_CODEC_RAW;
return true;
}
LOGE("Unsupported audio codec: %s (expected opus, aac or raw)", optarg);
LOGE("Unsupported audio codec: %s (expected opus)", optarg);
return false;
}
@@ -1540,9 +1511,6 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
int c;
while ((c = getopt_long(argc, argv, optstring, longopts, NULL)) != -1) {
switch (c) {
case OPT_BIT_RATE:
LOGW("--bit-rate is deprecated, use --video-bit-rate instead.");
// fall through
case 'b':
if (!parse_bit_rate(optarg, &opts->video_bit_rate)) {
return false;
@@ -1833,14 +1801,6 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
case OPT_LIST_DISPLAYS:
opts->list_displays = true;
break;
case OPT_REQUIRE_AUDIO:
opts->require_audio = true;
break;
case OPT_AUDIO_BUFFER:
if (!parse_buffering_time(optarg, &opts->audio_buffer)) {
return false;
}
break;
default:
// getopt prints the error message on stderr
return false;
@@ -1901,11 +1861,6 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
}
#endif
if (opts->audio && !opts->display && !opts->record_filename) {
LOGI("No display and no recording: audio disabled");
opts->audio = false;
}
if ((opts->tunnel_host || opts->tunnel_port) && !opts->force_adb_forward) {
LOGI("Tunnel host/port is set, "
"--force-adb-forward automatically enabled.");
@@ -1927,23 +1882,6 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
}
}
if (opts->record_filename && opts->audio_codec == SC_CODEC_RAW) {
LOGW("Recording does not support RAW audio codec");
return false;
}
if (opts->audio_codec == SC_CODEC_RAW) {
if (opts->audio_bit_rate) {
LOGW("--audio-bit-rate is ignored for raw audio codec");
}
if (opts->audio_codec_options) {
LOGW("--audio-codec-options is ignored for raw audio codec");
}
if (opts->audio_encoder) {
LOGW("--audio-encoder is ignored for raw audio codec");
}
}
if (!opts->control) {
if (opts->turn_screen_off) {
LOGE("Could not request to turn screen off if control is disabled");

View File

@@ -18,15 +18,7 @@ sc_clock_init(struct sc_clock *clock) {
static void
sc_clock_estimate(struct sc_clock *clock,
double *out_slope, sc_tick *out_offset) {
assert(clock->count);
if (clock->count == 1) {
// If there is only 1 point, we can't compute a slope. Assume it is 1.
struct sc_clock_point *single_point = &clock->right_sum;
*out_slope = 1;
*out_offset = single_point->system - single_point->stream;
return;
}
assert(clock->count > 1); // two points are necessary
struct sc_clock_point left_avg = {
.system = clock->left_sum.system / (clock->count / 2),
@@ -101,16 +93,19 @@ sc_clock_update(struct sc_clock *clock, sc_tick system, sc_tick stream) {
clock->head = (clock->head + 1) % SC_CLOCK_RANGE;
// Update estimation
sc_clock_estimate(clock, &clock->slope, &clock->offset);
if (clock->count > 1) {
// Update estimation
sc_clock_estimate(clock, &clock->slope, &clock->offset);
#ifndef SC_CLOCK_NDEBUG
LOGD("Clock estimation: %f * pts + %" PRItick, clock->slope, clock->offset);
LOGD("Clock estimation: %f * pts + %" PRItick,
clock->slope, clock->offset);
#endif
}
}
sc_tick
sc_clock_to_system_time(struct sc_clock *clock, sc_tick stream) {
assert(clock->count); // sc_clock_update() must have been called
assert(clock->count > 1); // sc_clock_update() must have been called
return (sc_tick) (stream * clock->slope) + clock->offset;
}

View File

@@ -5,8 +5,8 @@
#include "compat.h"
#define ARRAY_LEN(a) (sizeof(a) / sizeof(a[0]))
#define MIN(X,Y) ((X) < (Y) ? (X) : (Y))
#define MAX(X,Y) ((X) > (Y) ? (X) : (Y))
#define MIN(X,Y) (X) < (Y) ? (X) : (Y)
#define MAX(X,Y) (X) > (Y) ? (X) : (Y)
#define CLAMP(V,X,Y) MIN( MAX((V),(X)), (Y) )
#define container_of(ptr, type, member) \

View File

@@ -3,9 +3,6 @@
#include "config.h"
#include <assert.h>
#ifndef HAVE_REALLOCARRAY
# include <errno.h>
#endif
#include <stdlib.h>
#include <stdio.h>
#include <stdarg.h>
@@ -96,15 +93,5 @@ long jrand48(unsigned short xsubi[3]) {
return v.i;
}
#endif
#endif
#ifndef HAVE_REALLOCARRAY
void *reallocarray(void *ptr, size_t nmemb, size_t size) {
size_t bytes;
if (__builtin_mul_overflow(nmemb, size, &bytes)) {
errno = ENOMEM;
return NULL;
}
return realloc(ptr, bytes);
}
#endif

View File

@@ -37,13 +37,6 @@
# define SCRCPY_LAVF_HAS_AVFORMATCONTEXT_URL
#endif
// Not documented in ffmpeg/doc/APIchanges, but the channel_layout API
// has been replaced by chlayout in FFmpeg commit
// f423497b455da06c1337846902c770028760e094.
#if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(57, 23, 100)
# define SCRCPY_LAVU_HAS_CHLAYOUT
#endif
#if SDL_VERSION_ATLEAST(2, 0, 6)
// <https://github.com/libsdl-org/SDL/commit/d7a318de563125e5bb465b1000d6bc9576fbc6fc>
# define SCRCPY_SDL_HAS_HINT_TOUCH_MOUSE_EVENTS
@@ -54,10 +47,6 @@
# define SCRCPY_SDL_HAS_HINT_VIDEO_X11_NET_WM_BYPASS_COMPOSITOR
#endif
#if SDL_VERSION_ATLEAST(2, 0, 16)
# define SCRCPY_SDL_HAS_THREAD_PRIORITY_TIME_CRITICAL
#endif
#ifndef HAVE_STRDUP
char *strdup(const char *s);
#endif
@@ -78,8 +67,4 @@ long nrand48(unsigned short xsubi[3]);
long jrand48(unsigned short xsubi[3]);
#endif
#ifndef HAVE_REALLOCARRAY
void *reallocarray(void *ptr, size_t nmemb, size_t size);
#endif
#endif

View File

@@ -4,28 +4,19 @@
#include "util/log.h"
#define SC_CONTROL_MSG_QUEUE_MAX 64
bool
sc_controller_init(struct sc_controller *controller, sc_socket control_socket,
struct sc_acksync *acksync) {
sc_vecdeque_init(&controller->queue);
cbuf_init(&controller->queue);
bool ok = sc_vecdeque_reserve(&controller->queue, SC_CONTROL_MSG_QUEUE_MAX);
bool ok = sc_receiver_init(&controller->receiver, control_socket, acksync);
if (!ok) {
return false;
}
ok = sc_receiver_init(&controller->receiver, control_socket, acksync);
if (!ok) {
sc_vecdeque_destroy(&controller->queue);
return false;
}
ok = sc_mutex_init(&controller->mutex);
if (!ok) {
sc_receiver_destroy(&controller->receiver);
sc_vecdeque_destroy(&controller->queue);
return false;
}
@@ -33,7 +24,6 @@ sc_controller_init(struct sc_controller *controller, sc_socket control_socket,
if (!ok) {
sc_receiver_destroy(&controller->receiver);
sc_mutex_destroy(&controller->mutex);
sc_vecdeque_destroy(&controller->queue);
return false;
}
@@ -48,12 +38,10 @@ sc_controller_destroy(struct sc_controller *controller) {
sc_cond_destroy(&controller->msg_cond);
sc_mutex_destroy(&controller->mutex);
while (!sc_vecdeque_is_empty(&controller->queue)) {
struct sc_control_msg *msg = sc_vecdeque_popref(&controller->queue);
assert(msg);
sc_control_msg_destroy(msg);
struct sc_control_msg msg;
while (cbuf_take(&controller->queue, &msg)) {
sc_control_msg_destroy(&msg);
}
sc_vecdeque_destroy(&controller->queue);
sc_receiver_destroy(&controller->receiver);
}
@@ -66,19 +54,13 @@ sc_controller_push_msg(struct sc_controller *controller,
}
sc_mutex_lock(&controller->mutex);
bool full = sc_vecdeque_is_full(&controller->queue);
if (!full) {
bool was_empty = sc_vecdeque_is_empty(&controller->queue);
sc_vecdeque_push_noresize(&controller->queue, *msg);
if (was_empty) {
sc_cond_signal(&controller->msg_cond);
}
bool was_empty = cbuf_is_empty(&controller->queue);
bool res = cbuf_push(&controller->queue, *msg);
if (was_empty) {
sc_cond_signal(&controller->msg_cond);
}
// Otherwise (if the queue is full), the msg is discarded
sc_mutex_unlock(&controller->mutex);
return !full;
return res;
}
static bool
@@ -100,8 +82,7 @@ run_controller(void *data) {
for (;;) {
sc_mutex_lock(&controller->mutex);
while (!controller->stopped
&& sc_vecdeque_is_empty(&controller->queue)) {
while (!controller->stopped && cbuf_is_empty(&controller->queue)) {
sc_cond_wait(&controller->msg_cond, &controller->mutex);
}
if (controller->stopped) {
@@ -109,9 +90,10 @@ run_controller(void *data) {
sc_mutex_unlock(&controller->mutex);
break;
}
assert(!sc_vecdeque_is_empty(&controller->queue));
struct sc_control_msg msg = sc_vecdeque_pop(&controller->queue);
struct sc_control_msg msg;
bool non_empty = cbuf_take(&controller->queue, &msg);
assert(non_empty);
(void) non_empty;
sc_mutex_unlock(&controller->mutex);
bool ok = process_msg(controller, &msg);

View File

@@ -8,11 +8,11 @@
#include "control_msg.h"
#include "receiver.h"
#include "util/acksync.h"
#include "util/cbuf.h"
#include "util/net.h"
#include "util/thread.h"
#include "util/vecdeque.h"
struct sc_control_msg_queue SC_VECDEQUE(struct sc_control_msg);
struct sc_control_msg_queue CBUF(struct sc_control_msg, 64);
struct sc_controller {
sc_socket control_socket;

View File

@@ -5,12 +5,39 @@
#include <libavutil/channel_layout.h>
#include "events.h"
#include "video_buffer.h"
#include "trait/frame_sink.h"
#include "util/log.h"
/** Downcast packet_sink to decoder */
#define DOWNCAST(SINK) container_of(SINK, struct sc_decoder, packet_sink)
static void
sc_decoder_close_first_sinks(struct sc_decoder *decoder, unsigned count) {
while (count) {
struct sc_frame_sink *sink = decoder->sinks[--count];
sink->ops->close(sink);
}
}
static inline void
sc_decoder_close_sinks(struct sc_decoder *decoder) {
sc_decoder_close_first_sinks(decoder, decoder->sink_count);
}
static bool
sc_decoder_open_sinks(struct sc_decoder *decoder, const AVCodecContext *ctx) {
for (unsigned i = 0; i < decoder->sink_count; ++i) {
struct sc_frame_sink *sink = decoder->sinks[i];
if (!sink->ops->open(sink, ctx)) {
sc_decoder_close_first_sinks(decoder, i);
return false;
}
}
return true;
}
static bool
sc_decoder_open(struct sc_decoder *decoder, const AVCodec *codec) {
decoder->codec_ctx = avcodec_alloc_context3(codec);
@@ -26,13 +53,8 @@ sc_decoder_open(struct sc_decoder *decoder, const AVCodec *codec) {
decoder->codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;
} else {
// Hardcoded audio properties
#ifdef SCRCPY_LAVU_HAS_CHLAYOUT
decoder->codec_ctx->ch_layout =
(AVChannelLayout) AV_CHANNEL_LAYOUT_STEREO;
#else
decoder->codec_ctx->channel_layout = AV_CH_LAYOUT_STEREO;
decoder->codec_ctx->channels = 2;
#endif
decoder->codec_ctx->sample_rate = 48000;
}
@@ -50,8 +72,7 @@ sc_decoder_open(struct sc_decoder *decoder, const AVCodec *codec) {
return false;
}
if (!sc_frame_source_sinks_open(&decoder->frame_source,
decoder->codec_ctx)) {
if (!sc_decoder_open_sinks(decoder, decoder->codec_ctx)) {
av_frame_free(&decoder->frame);
avcodec_close(decoder->codec_ctx);
avcodec_free_context(&decoder->codec_ctx);
@@ -63,12 +84,24 @@ sc_decoder_open(struct sc_decoder *decoder, const AVCodec *codec) {
static void
sc_decoder_close(struct sc_decoder *decoder) {
sc_frame_source_sinks_close(&decoder->frame_source);
sc_decoder_close_sinks(decoder);
av_frame_free(&decoder->frame);
avcodec_close(decoder->codec_ctx);
avcodec_free_context(&decoder->codec_ctx);
}
static bool
push_frame_to_sinks(struct sc_decoder *decoder, const AVFrame *frame) {
for (unsigned i = 0; i < decoder->sink_count; ++i) {
struct sc_frame_sink *sink = decoder->sinks[i];
if (!sink->ops->push(sink, frame)) {
return false;
}
}
return true;
}
static bool
sc_decoder_push(struct sc_decoder *decoder, const AVPacket *packet) {
bool is_config = packet->pts == AV_NOPTS_VALUE;
@@ -83,29 +116,20 @@ sc_decoder_push(struct sc_decoder *decoder, const AVPacket *packet) {
decoder->name, ret);
return false;
}
for (;;) {
ret = avcodec_receive_frame(decoder->codec_ctx, decoder->frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
break;
}
if (ret) {
LOGE("Decoder '%s', could not receive video frame: %d",
decoder->name, ret);
return false;
}
ret = avcodec_receive_frame(decoder->codec_ctx, decoder->frame);
if (!ret) {
// a frame was received
bool ok = sc_frame_source_sinks_push(&decoder->frame_source,
decoder->frame);
av_frame_unref(decoder->frame);
if (!ok) {
// Error already logged
return false;
}
}
bool ok = push_frame_to_sinks(decoder, decoder->frame);
// A lost frame should not make the whole pipeline fail. The error, if
// any, is already logged.
(void) ok;
av_frame_unref(decoder->frame);
} else if (ret != AVERROR(EAGAIN)) {
LOGE("Decoder '%s', could not receive video frame: %d",
decoder->name, ret);
return false;
}
return true;
}
@@ -131,7 +155,7 @@ sc_decoder_packet_sink_push(struct sc_packet_sink *sink,
void
sc_decoder_init(struct sc_decoder *decoder, const char *name) {
decoder->name = name; // statically allocated
sc_frame_source_init(&decoder->frame_source);
decoder->sink_count = 0;
static const struct sc_packet_sink_ops ops = {
.open = sc_decoder_packet_sink_open,
@@ -141,3 +165,11 @@ sc_decoder_init(struct sc_decoder *decoder, const char *name) {
decoder->packet_sink.ops = &ops;
}
void
sc_decoder_add_sink(struct sc_decoder *decoder, struct sc_frame_sink *sink) {
assert(decoder->sink_count < SC_DECODER_MAX_SINKS);
assert(sink);
assert(sink->ops);
decoder->sinks[decoder->sink_count++] = sink;
}

View File

@@ -3,19 +3,22 @@
#include "common.h"
#include "trait/frame_source.h"
#include "trait/packet_sink.h"
#include <stdbool.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#define SC_DECODER_MAX_SINKS 2
struct sc_decoder {
struct sc_packet_sink packet_sink; // packet sink trait
struct sc_frame_source frame_source; // frame source trait
const char *name; // must be statically allocated (e.g. a string literal)
struct sc_frame_sink *sinks[SC_DECODER_MAX_SINKS];
unsigned sink_count;
AVCodecContext *codec_ctx;
AVFrame *frame;
};
@@ -24,4 +27,7 @@ struct sc_decoder {
void
sc_decoder_init(struct sc_decoder *decoder, const char *name);
void
sc_decoder_add_sink(struct sc_decoder *decoder, struct sc_frame_sink *sink);
#endif
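
A hypothetical wiring sketch based on the calls appearing elsewhere in this diff: a decoder exposes a packet sink (fed by a demuxer) and fans decoded frames out to up to SC_DECODER_MAX_SINKS frame sinks.

// Sketch only; 'screen' and 'v4l2_sink' stand for any components exposing a
// frame_sink, as in scrcpy.c further down.
struct sc_decoder video_decoder;
sc_decoder_init(&video_decoder, "video"); // name must be statically allocated
sc_decoder_add_sink(&video_decoder, &screen.frame_sink);
sc_decoder_add_sink(&video_decoder, &v4l2_sink.frame_sink);
// The decoder itself consumes packets, so it is registered on the demuxer:
sc_demuxer_add_sink(&video_demuxer, &video_decoder.packet_sink);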

View File

@@ -1,244 +0,0 @@
#include "delay_buffer.h"
#include <assert.h>
#include <stdlib.h>
#include <libavutil/avutil.h>
#include <libavformat/avformat.h>
#include "util/log.h"
#define SC_BUFFERING_NDEBUG // comment to debug
/** Downcast frame_sink to sc_delay_buffer */
#define DOWNCAST(SINK) container_of(SINK, struct sc_delay_buffer, frame_sink)
static bool
sc_delayed_frame_init(struct sc_delayed_frame *dframe, const AVFrame *frame) {
dframe->frame = av_frame_alloc();
if (!dframe->frame) {
LOG_OOM();
return false;
}
if (av_frame_ref(dframe->frame, frame)) {
LOG_OOM();
av_frame_free(&dframe->frame);
return false;
}
return true;
}
static void
sc_delayed_frame_destroy(struct sc_delayed_frame *dframe) {
av_frame_unref(dframe->frame);
av_frame_free(&dframe->frame);
}
static int
run_buffering(void *data) {
struct sc_delay_buffer *db = data;
assert(db->delay > 0);
for (;;) {
sc_mutex_lock(&db->mutex);
while (!db->stopped && sc_vecdeque_is_empty(&db->queue)) {
sc_cond_wait(&db->queue_cond, &db->mutex);
}
if (db->stopped) {
sc_mutex_unlock(&db->mutex);
goto stopped;
}
struct sc_delayed_frame dframe = sc_vecdeque_pop(&db->queue);
sc_tick max_deadline = sc_tick_now() + db->delay;
// PTS (written by the server) are expressed in microseconds
sc_tick pts = SC_TICK_FROM_US(dframe.frame->pts);
bool timed_out = false;
while (!db->stopped && !timed_out) {
sc_tick deadline = sc_clock_to_system_time(&db->clock, pts)
+ db->delay;
if (deadline > max_deadline) {
deadline = max_deadline;
}
timed_out =
!sc_cond_timedwait(&db->wait_cond, &db->mutex, deadline);
}
bool stopped = db->stopped;
sc_mutex_unlock(&db->mutex);
if (stopped) {
sc_delayed_frame_destroy(&dframe);
goto stopped;
}
#ifndef SC_BUFFERING_NDEBUG
LOGD("Buffering: %" PRItick ";%" PRItick ";%" PRItick,
pts, dframe.push_date, sc_tick_now());
#endif
bool ok = sc_frame_source_sinks_push(&db->frame_source, dframe.frame);
sc_delayed_frame_destroy(&dframe);
if (!ok) {
LOGE("Delayed frame could not be pushed, stopping");
sc_mutex_lock(&db->mutex);
// Prevent pushing any new frame
db->stopped = true;
sc_mutex_unlock(&db->mutex);
goto stopped;
}
}
stopped:
assert(db->stopped);
// Flush queue
while (!sc_vecdeque_is_empty(&db->queue)) {
struct sc_delayed_frame *dframe = sc_vecdeque_popref(&db->queue);
sc_delayed_frame_destroy(dframe);
}
LOGD("Buffering thread ended");
return 0;
}
static bool
sc_delay_buffer_frame_sink_open(struct sc_frame_sink *sink,
const AVCodecContext *ctx) {
struct sc_delay_buffer *db = DOWNCAST(sink);
(void) ctx;
bool ok = sc_mutex_init(&db->mutex);
if (!ok) {
return false;
}
ok = sc_cond_init(&db->queue_cond);
if (!ok) {
goto error_destroy_mutex;
}
ok = sc_cond_init(&db->wait_cond);
if (!ok) {
goto error_destroy_queue_cond;
}
sc_clock_init(&db->clock);
sc_vecdeque_init(&db->queue);
if (!sc_frame_source_sinks_open(&db->frame_source, ctx)) {
goto error_destroy_wait_cond;
}
ok = sc_thread_create(&db->thread, run_buffering, "scrcpy-dbuf", db);
if (!ok) {
LOGE("Could not start buffering thread");
goto error_close_sinks;
}
return true;
error_close_sinks:
sc_frame_source_sinks_close(&db->frame_source);
error_destroy_wait_cond:
sc_cond_destroy(&db->wait_cond);
error_destroy_queue_cond:
sc_cond_destroy(&db->queue_cond);
error_destroy_mutex:
sc_mutex_destroy(&db->mutex);
return false;
}
static void
sc_delay_buffer_frame_sink_close(struct sc_frame_sink *sink) {
struct sc_delay_buffer *db = DOWNCAST(sink);
sc_mutex_lock(&db->mutex);
db->stopped = true;
sc_cond_signal(&db->queue_cond);
sc_cond_signal(&db->wait_cond);
sc_mutex_unlock(&db->mutex);
sc_thread_join(&db->thread, NULL);
sc_frame_source_sinks_close(&db->frame_source);
sc_cond_destroy(&db->wait_cond);
sc_cond_destroy(&db->queue_cond);
sc_mutex_destroy(&db->mutex);
}
static bool
sc_delay_buffer_frame_sink_push(struct sc_frame_sink *sink,
const AVFrame *frame) {
struct sc_delay_buffer *db = DOWNCAST(sink);
sc_mutex_lock(&db->mutex);
if (db->stopped) {
sc_mutex_unlock(&db->mutex);
return false;
}
sc_tick pts = SC_TICK_FROM_US(frame->pts);
sc_clock_update(&db->clock, sc_tick_now(), pts);
sc_cond_signal(&db->wait_cond);
if (db->first_frame_asap && db->clock.count == 1) {
sc_mutex_unlock(&db->mutex);
return sc_frame_source_sinks_push(&db->frame_source, frame);
}
struct sc_delayed_frame dframe;
bool ok = sc_delayed_frame_init(&dframe, frame);
if (!ok) {
sc_mutex_unlock(&db->mutex);
return false;
}
#ifndef SC_BUFFERING_NDEBUG
dframe.push_date = sc_tick_now();
#endif
ok = sc_vecdeque_push(&db->queue, dframe);
if (!ok) {
sc_mutex_unlock(&db->mutex);
LOG_OOM();
return false;
}
sc_cond_signal(&db->queue_cond);
sc_mutex_unlock(&db->mutex);
return true;
}
void
sc_delay_buffer_init(struct sc_delay_buffer *db, sc_tick delay,
bool first_frame_asap) {
assert(delay > 0);
db->delay = delay;
db->first_frame_asap = first_frame_asap;
sc_frame_source_init(&db->frame_source);
static const struct sc_frame_sink_ops ops = {
.open = sc_delay_buffer_frame_sink_open,
.close = sc_delay_buffer_frame_sink_close,
.push = sc_delay_buffer_frame_sink_push,
};
db->frame_sink.ops = &ops;
}

View File

@@ -1,60 +0,0 @@
#ifndef SC_DELAY_BUFFER_H
#define SC_DELAY_BUFFER_H
#include "common.h"
#include <stdbool.h>
#include "clock.h"
#include "trait/frame_source.h"
#include "trait/frame_sink.h"
#include "util/thread.h"
#include "util/tick.h"
#include "util/vecdeque.h"
// forward declarations
typedef struct AVFrame AVFrame;
struct sc_delayed_frame {
AVFrame *frame;
#ifndef NDEBUG
sc_tick push_date;
#endif
};
struct sc_delayed_frame_queue SC_VECDEQUE(struct sc_delayed_frame);
struct sc_delay_buffer {
struct sc_frame_source frame_source; // frame source trait
struct sc_frame_sink frame_sink; // frame sink trait
sc_tick delay;
bool first_frame_asap;
sc_thread thread;
sc_mutex mutex;
sc_cond queue_cond;
sc_cond wait_cond;
struct sc_clock clock;
struct sc_delayed_frame_queue queue;
bool stopped;
};
struct sc_delay_buffer_callbacks {
bool (*on_new_frame)(struct sc_delay_buffer *db, const AVFrame *frame,
void *userdata);
};
/**
* Initialize a delay buffer.
*
* \param delay a (strictly) positive delay
* \param first_frame_asap if true, do not delay the first frame (useful for
a video stream).
*/
void
sc_delay_buffer_init(struct sc_delay_buffer *db, sc_tick delay,
bool first_frame_asap);
#endif
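
Since a delay buffer implements both the frame sink and frame source traits, it can be chained between a decoder and its final sink. A hypothetical sketch based on the init/add_sink calls visible in this diff (the 50 ms value is arbitrary):

// Sketch only; 'video_decoder' and 'screen' are placeholders.
static struct sc_delay_buffer display_buffer;
sc_delay_buffer_init(&display_buffer, SC_TICK_FROM_MS(50),
                     /* first_frame_asap */ true);
sc_frame_source_add_sink(&video_decoder.frame_source,
                         &display_buffer.frame_sink);
sc_frame_source_add_sink(&display_buffer.frame_source, &screen.frame_sink);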

View File

@@ -25,7 +25,6 @@ sc_demuxer_to_avcodec_id(uint32_t codec_id) {
#define SC_CODEC_ID_AV1 UINT32_C(0x00617631) // "av1" in ASCII
#define SC_CODEC_ID_OPUS UINT32_C(0x6f707573) // "opus" in ASCII
#define SC_CODEC_ID_AAC UINT32_C(0x00616163) // "aac" in ASCII
#define SC_CODEC_ID_RAW UINT32_C(0x00726177) // "raw" in ASCII
switch (codec_id) {
case SC_CODEC_ID_H264:
return AV_CODEC_ID_H264;
@@ -37,8 +36,6 @@ sc_demuxer_to_avcodec_id(uint32_t codec_id) {
return AV_CODEC_ID_OPUS;
case SC_CODEC_ID_AAC:
return AV_CODEC_ID_AAC;
case SC_CODEC_ID_RAW:
return AV_CODEC_ID_PCM_S16LE;
default:
LOGE("Unknown codec id 0x%08" PRIx32, codec_id);
return AV_CODEC_ID_NONE;
@@ -115,26 +112,86 @@ sc_demuxer_recv_packet(struct sc_demuxer *demuxer, AVPacket *packet) {
return true;
}
static bool
push_packet_to_sinks(struct sc_demuxer *demuxer, const AVPacket *packet) {
for (unsigned i = 0; i < demuxer->sink_count; ++i) {
struct sc_packet_sink *sink = demuxer->sinks[i];
if (!sink->ops->push(sink, packet)) {
return false;
}
}
return true;
}
static bool
sc_demuxer_push_packet(struct sc_demuxer *demuxer, AVPacket *packet) {
bool ok = push_packet_to_sinks(demuxer, packet);
if (!ok) {
LOGE("Demuxer '%s': could not process packet", demuxer->name);
return false;
}
return true;
}
static void
sc_demuxer_close_first_sinks(struct sc_demuxer *demuxer, unsigned count) {
while (count) {
struct sc_packet_sink *sink = demuxer->sinks[--count];
sink->ops->close(sink);
}
}
static inline void
sc_demuxer_close_sinks(struct sc_demuxer *demuxer) {
sc_demuxer_close_first_sinks(demuxer, demuxer->sink_count);
}
static bool
sc_demuxer_open_sinks(struct sc_demuxer *demuxer, const AVCodec *codec) {
for (unsigned i = 0; i < demuxer->sink_count; ++i) {
struct sc_packet_sink *sink = demuxer->sinks[i];
if (!sink->ops->open(sink, codec)) {
sc_demuxer_close_first_sinks(demuxer, i);
return false;
}
}
return true;
}
static void
sc_demuxer_disable_sinks(struct sc_demuxer *demuxer) {
for (unsigned i = 0; i < demuxer->sink_count; ++i) {
struct sc_packet_sink *sink = demuxer->sinks[i];
if (sink->ops->disable) {
sink->ops->disable(sink);
}
}
}
static int
run_demuxer(void *data) {
struct sc_demuxer *demuxer = data;
// Flag to report end-of-stream (i.e. device disconnected)
enum sc_demuxer_status status = SC_DEMUXER_STATUS_ERROR;
bool eos = false;
uint32_t raw_codec_id;
bool ok = sc_demuxer_recv_codec_id(demuxer, &raw_codec_id);
if (!ok) {
LOGE("Demuxer '%s': stream disabled due to connection error",
demuxer->name);
eos = true;
goto end;
}
if (raw_codec_id == 0) {
LOGW("Demuxer '%s': stream explicitly disabled by the device",
demuxer->name);
sc_packet_source_sinks_disable(&demuxer->packet_source);
status = SC_DEMUXER_STATUS_DISABLED;
sc_demuxer_disable_sinks(demuxer);
eos = true;
goto end;
}
@@ -148,7 +205,7 @@ run_demuxer(void *data) {
if (codec_id == AV_CODEC_ID_NONE) {
LOGE("Demuxer '%s': stream disabled due to unsupported codec",
demuxer->name);
sc_packet_source_sinks_disable(&demuxer->packet_source);
sc_demuxer_disable_sinks(demuxer);
goto end;
}
@@ -156,11 +213,11 @@ run_demuxer(void *data) {
if (!codec) {
LOGE("Demuxer '%s': stream disabled due to missing decoder",
demuxer->name);
sc_packet_source_sinks_disable(&demuxer->packet_source);
sc_demuxer_disable_sinks(demuxer);
goto end;
}
if (!sc_packet_source_sinks_open(&demuxer->packet_source, codec)) {
if (!sc_demuxer_open_sinks(demuxer, codec)) {
goto end;
}
@@ -184,7 +241,7 @@ run_demuxer(void *data) {
bool ok = sc_demuxer_recv_packet(demuxer, packet);
if (!ok) {
// end of stream
status = SC_DEMUXER_STATUS_EOS;
eos = true;
break;
}
@@ -197,10 +254,10 @@ run_demuxer(void *data) {
}
}
ok = sc_packet_source_sinks_push(&demuxer->packet_source, packet);
ok = sc_demuxer_push_packet(demuxer, packet);
av_packet_unref(packet);
if (!ok) {
// The sink already logged its concrete error
// cannot process packet (error already logged)
break;
}
}
@@ -213,9 +270,9 @@ run_demuxer(void *data) {
av_packet_free(&packet);
finally_close_sinks:
sc_packet_source_sinks_close(&demuxer->packet_source);
sc_demuxer_close_sinks(demuxer);
end:
demuxer->cbs->on_ended(demuxer, status, demuxer->cbs_userdata);
demuxer->cbs->on_ended(demuxer, eos, demuxer->cbs_userdata);
return 0;
}
@@ -227,7 +284,7 @@ sc_demuxer_init(struct sc_demuxer *demuxer, const char *name, sc_socket socket,
demuxer->name = name; // statically allocated
demuxer->socket = socket;
sc_packet_source_init(&demuxer->packet_source);
demuxer->sink_count = 0;
assert(cbs && cbs->on_ended);
@@ -235,6 +292,14 @@ sc_demuxer_init(struct sc_demuxer *demuxer, const char *name, sc_socket socket,
demuxer->cbs_userdata = cbs_userdata;
}
void
sc_demuxer_add_sink(struct sc_demuxer *demuxer, struct sc_packet_sink *sink) {
assert(demuxer->sink_count < SC_DEMUXER_MAX_SINKS);
assert(sink);
assert(sink->ops);
demuxer->sinks[demuxer->sink_count++] = sink;
}
bool
sc_demuxer_start(struct sc_demuxer *demuxer) {
LOGD("Demuxer '%s': starting thread", demuxer->name);

View File

@@ -8,32 +8,27 @@
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include "trait/packet_source.h"
#include "trait/packet_sink.h"
#include "util/net.h"
#include "util/thread.h"
struct sc_demuxer {
struct sc_packet_source packet_source; // packet source trait
#define SC_DEMUXER_MAX_SINKS 2
struct sc_demuxer {
const char *name; // must be statically allocated (e.g. a string literal)
sc_socket socket;
sc_thread thread;
struct sc_packet_sink *sinks[SC_DEMUXER_MAX_SINKS];
unsigned sink_count;
const struct sc_demuxer_callbacks *cbs;
void *cbs_userdata;
};
enum sc_demuxer_status {
SC_DEMUXER_STATUS_EOS,
SC_DEMUXER_STATUS_DISABLED,
SC_DEMUXER_STATUS_ERROR,
};
struct sc_demuxer_callbacks {
void (*on_ended)(struct sc_demuxer *demuxer, enum sc_demuxer_status,
void *userdata);
void (*on_ended)(struct sc_demuxer *demuxer, bool eos, void *userdata);
};
// The name must be statically allocated (e.g. a string literal)
@@ -41,6 +36,9 @@ void
sc_demuxer_init(struct sc_demuxer *demuxer, const char *name, sc_socket socket,
const struct sc_demuxer_callbacks *cbs, void *cbs_userdata);
void
sc_demuxer_add_sink(struct sc_demuxer *demuxer, struct sc_packet_sink *sink);
bool
sc_demuxer_start(struct sc_demuxer *demuxer);
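
A hypothetical setup sketch using the API above, mirroring the calls in scrcpy.c later in this diff: one demuxer can feed several packet sinks (for example a decoder and a recorder).

// Sketch only; the callbacks struct must provide on_ended().
static const struct sc_demuxer_callbacks cbs = {
    .on_ended = on_video_demuxer_ended, // hypothetical callback
};
sc_demuxer_init(&video_demuxer, "video", video_socket, &cbs, NULL);
sc_demuxer_add_sink(&video_demuxer, &video_decoder.packet_sink);
sc_demuxer_add_sink(&video_demuxer, &recorder.video_packet_sink);
if (!sc_demuxer_start(&video_demuxer)) {
    // could not start the demuxer thread
}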

View File

@@ -19,7 +19,7 @@ sc_file_pusher_init(struct sc_file_pusher *fp, const char *serial,
const char *push_target) {
assert(serial);
sc_vecdeque_init(&fp->queue);
cbuf_init(&fp->queue);
bool ok = sc_mutex_init(&fp->mutex);
if (!ok) {
@@ -65,10 +65,9 @@ sc_file_pusher_destroy(struct sc_file_pusher *fp) {
sc_intr_destroy(&fp->intr);
free(fp->serial);
while (!sc_vecdeque_is_empty(&fp->queue)) {
struct sc_file_pusher_request *req = sc_vecdeque_popref(&fp->queue);
assert(req);
sc_file_pusher_request_destroy(req);
struct sc_file_pusher_request req;
while (cbuf_take(&fp->queue, &req)) {
sc_file_pusher_request_destroy(&req);
}
}
@@ -92,20 +91,13 @@ sc_file_pusher_request(struct sc_file_pusher *fp,
};
sc_mutex_lock(&fp->mutex);
bool was_empty = sc_vecdeque_is_empty(&fp->queue);
bool res = sc_vecdeque_push(&fp->queue, req);
if (!res) {
LOG_OOM();
sc_mutex_unlock(&fp->mutex);
return false;
}
bool was_empty = cbuf_is_empty(&fp->queue);
bool res = cbuf_push(&fp->queue, req);
if (was_empty) {
sc_cond_signal(&fp->event_cond);
}
sc_mutex_unlock(&fp->mutex);
return true;
return res;
}
static int
@@ -121,7 +113,7 @@ run_file_pusher(void *data) {
for (;;) {
sc_mutex_lock(&fp->mutex);
while (!fp->stopped && sc_vecdeque_is_empty(&fp->queue)) {
while (!fp->stopped && cbuf_is_empty(&fp->queue)) {
sc_cond_wait(&fp->event_cond, &fp->mutex);
}
if (fp->stopped) {
@@ -129,9 +121,10 @@ run_file_pusher(void *data) {
sc_mutex_unlock(&fp->mutex);
break;
}
assert(!sc_vecdeque_is_empty(&fp->queue));
struct sc_file_pusher_request req = sc_vecdeque_pop(&fp->queue);
struct sc_file_pusher_request req;
bool non_empty = cbuf_take(&fp->queue, &req);
assert(non_empty);
(void) non_empty;
sc_mutex_unlock(&fp->mutex);
if (req.action == SC_FILE_PUSHER_ACTION_INSTALL_APK) {

View File

@@ -5,9 +5,9 @@
#include <stdbool.h>
#include "util/intr.h"
#include "util/cbuf.h"
#include "util/thread.h"
#include "util/vecdeque.h"
#include "util/intr.h"
enum sc_file_pusher_action {
SC_FILE_PUSHER_ACTION_INSTALL_APK,
@@ -19,7 +19,7 @@ struct sc_file_pusher_request {
char *file;
};
struct sc_file_pusher_request_queue SC_VECDEQUE(struct sc_file_pusher_request);
struct sc_file_pusher_request_queue CBUF(struct sc_file_pusher_request, 16);
struct sc_file_pusher {
char *serial;

View File

@@ -69,7 +69,7 @@ decode_image(const char *path) {
}
if (avformat_open_input(&ctx, path, NULL, NULL) < 0) {
LOGE("Could not open icon image: %s", path);
LOGE("Could not open image codec: %s", path);
goto free_ctx;
}

View File

@@ -4,6 +4,10 @@
#include <stdbool.h>
#include <unistd.h>
#include <libavformat/avformat.h>
#ifdef _WIN32
#include <windows.h>
#include "util/str.h"
#endif
#ifdef HAVE_V4L2
# include <libavdevice/avdevice.h>
#endif
@@ -15,14 +19,8 @@
#include "scrcpy.h"
#include "usb/scrcpy_otg.h"
#include "util/log.h"
#include "util/net.h"
#include "version.h"
#ifdef _WIN32
#include <windows.h>
#include "util/str.h"
#endif
int
main_scrcpy(int argc, char *argv[]) {
#ifdef _WIN32
@@ -71,7 +69,7 @@ main_scrcpy(int argc, char *argv[]) {
}
#endif
if (!net_init()) {
if (avformat_network_init()) {
return SCRCPY_EXIT_FAILURE;
}
@@ -84,6 +82,8 @@ main_scrcpy(int argc, char *argv[]) {
enum scrcpy_exit_code ret = scrcpy(&args.opts);
#endif
avformat_network_deinit(); // ignore failure
return ret;
}

View File

@@ -43,7 +43,6 @@ const struct scrcpy_options scrcpy_options_default = {
.display_id = 0,
.display_buffer = 0,
.v4l2_buffer = 0,
.audio_buffer = SC_TICK_FROM_MS(50),
#ifdef HAVE_USB
.otg = false,
#endif
@@ -73,7 +72,6 @@ const struct scrcpy_options scrcpy_options_default = {
.start_fps_counter = false,
.power_on = true,
.audio = true,
.require_audio = false,
.list_encoders = false,
.list_displays = false,
};

View File

@@ -29,7 +29,6 @@ enum sc_codec {
SC_CODEC_AV1,
SC_CODEC_OPUS,
SC_CODEC_AAC,
SC_CODEC_RAW,
};
enum sc_lock_video_orientation {
@@ -126,7 +125,6 @@ struct scrcpy_options {
uint32_t display_id;
sc_tick display_buffer;
sc_tick v4l2_buffer;
sc_tick audio_buffer;
#ifdef HAVE_USB
bool otg;
#endif
@@ -156,7 +154,6 @@ struct scrcpy_options {
bool start_fps_counter;
bool power_on;
bool audio;
bool require_audio;
bool list_encoders;
bool list_displays;
};

View File

@@ -33,27 +33,41 @@ find_muxer(const char *name) {
return oformat;
}
static AVPacket *
sc_recorder_packet_ref(const AVPacket *packet) {
AVPacket *p = av_packet_alloc();
if (!p) {
static struct sc_record_packet *
sc_record_packet_new(const AVPacket *packet) {
struct sc_record_packet *rec = malloc(sizeof(*rec));
if (!rec) {
LOG_OOM();
return NULL;
}
if (av_packet_ref(p, packet)) {
av_packet_free(&p);
rec->packet = av_packet_alloc();
if (!rec->packet) {
LOG_OOM();
free(rec);
return NULL;
}
return p;
if (av_packet_ref(rec->packet, packet)) {
av_packet_free(&rec->packet);
free(rec);
return NULL;
}
return rec;
}
static void
sc_record_packet_delete(struct sc_record_packet *rec) {
av_packet_free(&rec->packet);
free(rec);
}
static void
sc_recorder_queue_clear(struct sc_recorder_queue *queue) {
while (!sc_vecdeque_is_empty(queue)) {
AVPacket *p = sc_vecdeque_pop(queue);
av_packet_free(&p);
while (!sc_queue_is_empty(queue)) {
struct sc_record_packet *rec;
sc_queue_take(queue, next, &rec);
sc_record_packet_delete(rec);
}
}
@@ -202,12 +216,7 @@ sc_recorder_wait_audio_stream(struct sc_recorder *recorder) {
stream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
stream->codecpar->codec_id = codec->id;
#ifdef SCRCPY_LAVU_HAS_CHLAYOUT
stream->codecpar->ch_layout.nb_channels = 2;
#else
stream->codecpar->channel_layout = AV_CH_LAYOUT_STEREO;
stream->codecpar->channels = 2;
#endif
stream->codecpar->sample_rate = 48000;
recorder->audio_stream_index = stream->index;
@@ -218,12 +227,12 @@ sc_recorder_wait_audio_stream(struct sc_recorder *recorder) {
static inline bool
sc_recorder_has_empty_queues(struct sc_recorder *recorder) {
if (sc_vecdeque_is_empty(&recorder->video_queue)) {
if (sc_queue_is_empty(&recorder->video_queue)) {
// The video queue is empty
return true;
}
if (recorder->audio && sc_vecdeque_is_empty(&recorder->audio_queue)) {
if (recorder->audio && sc_queue_is_empty(&recorder->audio_queue)) {
// The audio queue is empty (when audio is enabled)
return true;
}
@@ -240,27 +249,27 @@ sc_recorder_process_header(struct sc_recorder *recorder) {
sc_cond_wait(&recorder->queue_cond, &recorder->mutex);
}
if (sc_vecdeque_is_empty(&recorder->video_queue)) {
assert(recorder->stopped);
if (recorder->stopped && sc_queue_is_empty(&recorder->video_queue)) {
// If the recorder is stopped, don't process anything unless there is at
// least one video packet
sc_mutex_unlock(&recorder->mutex);
return false;
}
AVPacket *video_pkt = sc_vecdeque_pop(&recorder->video_queue);
struct sc_record_packet *video_pkt;
sc_queue_take(&recorder->video_queue, next, &video_pkt);
AVPacket *audio_pkt = NULL;
if (!sc_vecdeque_is_empty(&recorder->audio_queue)) {
struct sc_record_packet *audio_pkt = NULL;
if (!sc_queue_is_empty(&recorder->audio_queue)) {
assert(recorder->audio);
audio_pkt = sc_vecdeque_pop(&recorder->audio_queue);
sc_queue_take(&recorder->audio_queue, next, &audio_pkt);
}
sc_mutex_unlock(&recorder->mutex);
int ret = false;
if (video_pkt->pts != AV_NOPTS_VALUE) {
if (video_pkt->packet->pts != AV_NOPTS_VALUE) {
LOGE("The first video packet is not a config packet");
goto end;
}
@@ -268,13 +277,13 @@ sc_recorder_process_header(struct sc_recorder *recorder) {
assert(recorder->video_stream_index >= 0);
AVStream *video_stream =
recorder->ctx->streams[recorder->video_stream_index];
bool ok = sc_recorder_set_extradata(video_stream, video_pkt);
bool ok = sc_recorder_set_extradata(video_stream, video_pkt->packet);
if (!ok) {
goto end;
}
if (audio_pkt) {
if (audio_pkt->pts != AV_NOPTS_VALUE) {
if (audio_pkt->packet->pts != AV_NOPTS_VALUE) {
LOGE("The first audio packet is not a config packet");
goto end;
}
@@ -282,7 +291,7 @@ sc_recorder_process_header(struct sc_recorder *recorder) {
assert(recorder->audio_stream_index >= 0);
AVStream *audio_stream =
recorder->ctx->streams[recorder->audio_stream_index];
ok = sc_recorder_set_extradata(audio_stream, audio_pkt);
ok = sc_recorder_set_extradata(audio_stream, audio_pkt->packet);
if (!ok) {
goto end;
}
@@ -297,9 +306,9 @@ sc_recorder_process_header(struct sc_recorder *recorder) {
ret = true;
end:
av_packet_free(&video_pkt);
sc_record_packet_delete(video_pkt);
if (audio_pkt) {
av_packet_free(&audio_pkt);
sc_record_packet_delete(audio_pkt);
}
return ret;
@@ -314,12 +323,12 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
return false;
}
AVPacket *video_pkt = NULL;
AVPacket *audio_pkt = NULL;
struct sc_record_packet *video_pkt = NULL;
struct sc_record_packet *audio_pkt = NULL;
// We can write a video packet only once we have received the next one, so
// that we can set its duration (next_pts - current_pts)
AVPacket *video_pkt_previous = NULL;
struct sc_record_packet *video_pkt_previous = NULL;
bool error = false;
@@ -327,12 +336,12 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
sc_mutex_lock(&recorder->mutex);
while (!recorder->stopped) {
if (!video_pkt && !sc_vecdeque_is_empty(&recorder->video_queue)) {
if (!video_pkt && !sc_queue_is_empty(&recorder->video_queue)) {
// A new packet may be assigned to video_pkt and be processed
break;
}
if (recorder->audio && !audio_pkt
&& !sc_vecdeque_is_empty(&recorder->audio_queue)) {
&& !sc_queue_is_empty(&recorder->audio_queue)) {
// A new packet may be assigned to audio_pkt and be processed
break;
}
@@ -344,20 +353,20 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
// If there is no audio, then the audio_queue will remain empty forever
// and audio_pkt will always be NULL.
assert(recorder->audio || (!audio_pkt
&& sc_vecdeque_is_empty(&recorder->audio_queue)));
assert(recorder->audio
|| (!audio_pkt && sc_queue_is_empty(&recorder->audio_queue)));
if (!video_pkt && !sc_vecdeque_is_empty(&recorder->video_queue)) {
video_pkt = sc_vecdeque_pop(&recorder->video_queue);
if (!video_pkt && !sc_queue_is_empty(&recorder->video_queue)) {
sc_queue_take(&recorder->video_queue, next, &video_pkt);
}
if (!audio_pkt && !sc_vecdeque_is_empty(&recorder->audio_queue)) {
audio_pkt = sc_vecdeque_pop(&recorder->audio_queue);
if (!audio_pkt && !sc_queue_is_empty(&recorder->audio_queue)) {
sc_queue_take(&recorder->audio_queue, next, &audio_pkt);
}
if (recorder->stopped && !video_pkt && !audio_pkt) {
assert(sc_vecdeque_is_empty(&recorder->video_queue));
assert(sc_vecdeque_is_empty(&recorder->audio_queue));
assert(sc_queue_is_empty(&recorder->video_queue));
assert(sc_queue_is_empty(&recorder->audio_queue));
sc_mutex_unlock(&recorder->mutex);
break;
}
@@ -369,32 +378,38 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
// Ignore further config packets (e.g. on device orientation
// change). The next non-config packet will have the config packet
// data prepended.
if (video_pkt && video_pkt->pts == AV_NOPTS_VALUE) {
av_packet_free(&video_pkt);
if (video_pkt && video_pkt->packet->pts == AV_NOPTS_VALUE) {
sc_record_packet_delete(video_pkt);
video_pkt = NULL;
}
if (audio_pkt && audio_pkt->pts == AV_NOPTS_VALUE) {
av_packet_free(&audio_pkt);
audio_pkt = NULL;
if (audio_pkt && audio_pkt->packet->pts == AV_NOPTS_VALUE) {
sc_record_packet_delete(audio_pkt);
audio_pkt = NULL;
}
if (pts_origin == AV_NOPTS_VALUE) {
if (!recorder->audio) {
assert(video_pkt);
pts_origin = video_pkt->pts;
pts_origin = video_pkt->packet->pts;
} else if (video_pkt && audio_pkt) {
pts_origin = MIN(video_pkt->pts, audio_pkt->pts);
pts_origin =
MIN(video_pkt->packet->pts, audio_pkt->packet->pts);
} else if (recorder->stopped) {
if (video_pkt) {
// The recorder is stopped without audio, record the video
// packets
pts_origin = video_pkt->pts;
pts_origin = video_pkt->packet->pts;
} else {
// Fail if there is no video
error = true;
goto end;
}
// If the recorder is stopped while one of the streams has no
// packets, then we must avoid a live-loop and correctly record
// the stream having packets.
pts_origin = video_pkt ? video_pkt->packet->pts
: audio_pkt->packet->pts;
} else {
// We need both video and audio packets to initialize pts_origin
continue;
@@ -404,16 +419,17 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
assert(pts_origin != AV_NOPTS_VALUE);
if (video_pkt) {
video_pkt->pts -= pts_origin;
video_pkt->dts = video_pkt->pts;
video_pkt->packet->pts -= pts_origin;
video_pkt->packet->dts = video_pkt->packet->pts;
if (video_pkt_previous) {
// we now know the duration of the previous packet
video_pkt_previous->duration = video_pkt->pts
- video_pkt_previous->pts;
video_pkt_previous->packet->duration =
video_pkt->packet->pts - video_pkt_previous->packet->pts;
bool ok = sc_recorder_write_video(recorder, video_pkt_previous);
av_packet_free(&video_pkt_previous);
bool ok = sc_recorder_write_video(recorder,
video_pkt_previous->packet);
sc_record_packet_delete(video_pkt_previous);
if (!ok) {
LOGE("Could not record video packet");
error = true;
@@ -426,34 +442,34 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
}
if (audio_pkt) {
audio_pkt->pts -= pts_origin;
audio_pkt->dts = audio_pkt->pts;
audio_pkt->packet->pts -= pts_origin;
audio_pkt->packet->dts = audio_pkt->packet->pts;
bool ok = sc_recorder_write_audio(recorder, audio_pkt);
bool ok = sc_recorder_write_audio(recorder, audio_pkt->packet);
if (!ok) {
LOGE("Could not record audio packet");
error = true;
goto end;
}
av_packet_free(&audio_pkt);
sc_record_packet_delete(audio_pkt);
audio_pkt = NULL;
}
}
// Write the last video packet
AVPacket *last = video_pkt_previous;
struct sc_record_packet *last = video_pkt_previous;
if (last) {
// assign an arbitrary duration to the last packet
last->duration = 100000;
bool ok = sc_recorder_write_video(recorder, last);
last->packet->duration = 100000;
bool ok = sc_recorder_write_video(recorder, last->packet);
if (!ok) {
// failing to write the last frame is not very serious, no
// future frame may depend on it, so the resulting file
// will still be valid
LOGW("Could not record last packet");
}
av_packet_free(&last);
sc_record_packet_delete(last);
}
int ret = av_write_trailer(recorder->ctx);
@@ -464,10 +480,10 @@ sc_recorder_process_packets(struct sc_recorder *recorder) {
end:
if (video_pkt) {
av_packet_free(&video_pkt);
sc_record_packet_delete(video_pkt);
}
if (audio_pkt) {
av_packet_free(&audio_pkt);
sc_record_packet_delete(audio_pkt);
}
return !error;
@@ -505,10 +521,6 @@ static int
run_recorder(void *data) {
struct sc_recorder *recorder = data;
// Recording is a background task
bool ok = sc_thread_set_priority(SC_THREAD_PRIORITY_LOW);
(void) ok; // We don't care if it worked
bool success = sc_recorder_record(recorder);
sc_mutex_lock(&recorder->mutex);
@@ -516,7 +528,6 @@ run_recorder(void *data) {
recorder->stopped = true;
// Discard pending packets
sc_recorder_queue_clear(&recorder->video_queue);
sc_recorder_queue_clear(&recorder->audio_queue);
sc_mutex_unlock(&recorder->mutex);
if (success) {
@@ -577,22 +588,16 @@ sc_recorder_video_packet_sink_push(struct sc_packet_sink *sink,
return false;
}
AVPacket *rec = sc_recorder_packet_ref(packet);
struct sc_record_packet *rec = sc_record_packet_new(packet);
if (!rec) {
LOG_OOM();
sc_mutex_unlock(&recorder->mutex);
return false;
}
rec->stream_index = recorder->video_stream_index;
bool ok = sc_vecdeque_push(&recorder->video_queue, rec);
if (!ok) {
LOG_OOM();
sc_mutex_unlock(&recorder->mutex);
return false;
}
rec->packet->stream_index = 0;
sc_queue_push(&recorder->video_queue, next, rec);
sc_cond_signal(&recorder->queue_cond);
sc_mutex_unlock(&recorder->mutex);
@@ -646,22 +651,16 @@ sc_recorder_audio_packet_sink_push(struct sc_packet_sink *sink,
return false;
}
AVPacket *rec = sc_recorder_packet_ref(packet);
struct sc_record_packet *rec = sc_record_packet_new(packet);
if (!rec) {
LOG_OOM();
sc_mutex_unlock(&recorder->mutex);
return false;
}
rec->stream_index = recorder->audio_stream_index;
bool ok = sc_vecdeque_push(&recorder->audio_queue, rec);
if (!ok) {
LOG_OOM();
sc_mutex_unlock(&recorder->mutex);
return false;
}
rec->packet->stream_index = 1;
sc_queue_push(&recorder->audio_queue, next, rec);
sc_cond_signal(&recorder->queue_cond);
sc_mutex_unlock(&recorder->mutex);
@@ -712,8 +711,8 @@ sc_recorder_init(struct sc_recorder *recorder, const char *filename,
recorder->audio = audio;
sc_vecdeque_init(&recorder->video_queue);
sc_vecdeque_init(&recorder->audio_queue);
sc_queue_init(&recorder->video_queue);
sc_queue_init(&recorder->audio_queue);
recorder->stopped = false;
recorder->video_codec = NULL;

View File

@@ -9,10 +9,15 @@
#include "coords.h"
#include "options.h"
#include "trait/packet_sink.h"
#include "util/queue.h"
#include "util/thread.h"
#include "util/vecdeque.h"
struct sc_recorder_queue SC_VECDEQUE(AVPacket *);
struct sc_record_packet {
AVPacket *packet;
struct sc_record_packet *next;
};
struct sc_recorder_queue SC_QUEUE(struct sc_record_packet);
struct sc_recorder {
struct sc_packet_sink video_packet_sink;
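
Each recorded AVPacket is wrapped in an sc_record_packet carrying an intrusive next link, so packets can be queued without a separate node allocation. A hypothetical sketch of the queue usage shown above, assuming the sc_queue macros take the name of the link field:

// Sketch only; 'packet' is any AVPacket received from a packet sink.
struct sc_recorder_queue queue;
sc_queue_init(&queue);

struct sc_record_packet *rec = sc_record_packet_new(packet); // ref + wrap
if (rec) {
    sc_queue_push(&queue, next, rec);
}

if (!sc_queue_is_empty(&queue)) {
    struct sc_record_packet *first;
    sc_queue_take(&queue, next, &first);
    // ... use first->packet (e.g. write it to the muxer), then release it:
    sc_record_packet_delete(first);
}

As the recorder code above shows, a video packet is only written once the next one is available, so that its duration can be set to next_pts - current_pts; the last packet gets an arbitrary duration instead.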

View File

@@ -16,7 +16,6 @@
#include "audio_player.h"
#include "controller.h"
#include "decoder.h"
#include "delay_buffer.h"
#include "demuxer.h"
#include "events.h"
#include "file_pusher.h"
@@ -48,10 +47,8 @@ struct scrcpy {
struct sc_decoder video_decoder;
struct sc_decoder audio_decoder;
struct sc_recorder recorder;
struct sc_delay_buffer display_buffer;
#ifdef HAVE_V4L2
struct sc_v4l2_sink v4l2_sink;
struct sc_delay_buffer v4l2_buffer;
#endif
struct sc_controller controller;
struct sc_file_pusher file_pusher;
@@ -221,15 +218,12 @@ sc_recorder_on_ended(struct sc_recorder *recorder, bool success,
}
static void
sc_video_demuxer_on_ended(struct sc_demuxer *demuxer,
enum sc_demuxer_status status, void *userdata) {
sc_video_demuxer_on_ended(struct sc_demuxer *demuxer, bool eos,
void *userdata) {
(void) demuxer;
(void) userdata;
// The device may not decide to disable the video
assert(status != SC_DEMUXER_STATUS_DISABLED);
if (status == SC_DEMUXER_STATUS_EOS) {
if (eos) {
PUSH_EVENT(SC_EVENT_DEVICE_DISCONNECTED);
} else {
PUSH_EVENT(SC_EVENT_DEMUXER_ERROR);
@@ -237,17 +231,20 @@ sc_video_demuxer_on_ended(struct sc_demuxer *demuxer,
}
static void
sc_audio_demuxer_on_ended(struct sc_demuxer *demuxer,
enum sc_demuxer_status status, void *userdata) {
sc_audio_demuxer_on_ended(struct sc_demuxer *demuxer, bool eos,
void *userdata) {
(void) demuxer;
(void) userdata;
const struct scrcpy_options *options = userdata;
// Contrary to the video demuxer, keep mirroring if only the audio fails.
// 'eos' is true on end-of-stream, including when audio capture is not
// possible on the device (so that scrcpy continues to mirror video without
// failing).
// However, if an audio configuration failure occurs (for example the user
// explicitly selected an unknown audio encoder), 'eos' is false and scrcpy
// must exit.
// Contrary to the video demuxer, keep mirroring if only the audio fails
// (unless --require-audio is set).
if (status == SC_DEMUXER_STATUS_ERROR
|| (status == SC_DEMUXER_STATUS_DISABLED
&& options->require_audio)) {
if (!eos) {
PUSH_EVENT(SC_EVENT_DEMUXER_ERROR);
}
}
@@ -446,7 +443,7 @@ scrcpy(struct scrcpy_options *options) {
.on_ended = sc_audio_demuxer_on_ended,
};
sc_demuxer_init(&s->audio_demuxer, "audio", s->server.audio_socket,
&audio_demuxer_cbs, options);
&audio_demuxer_cbs, NULL);
}
bool needs_video_decoder = options->display;
@@ -456,13 +453,11 @@ scrcpy(struct scrcpy_options *options) {
#endif
if (needs_video_decoder) {
sc_decoder_init(&s->video_decoder, "video");
sc_packet_source_add_sink(&s->video_demuxer.packet_source,
&s->video_decoder.packet_sink);
sc_demuxer_add_sink(&s->video_demuxer, &s->video_decoder.packet_sink);
}
if (needs_audio_decoder) {
sc_decoder_init(&s->audio_decoder, "audio");
sc_packet_source_add_sink(&s->audio_demuxer.packet_source,
&s->audio_decoder.packet_sink);
sc_demuxer_add_sink(&s->audio_demuxer, &s->audio_decoder.packet_sink);
}
if (options->record_filename) {
@@ -481,11 +476,10 @@ scrcpy(struct scrcpy_options *options) {
}
recorder_started = true;
sc_packet_source_add_sink(&s->video_demuxer.packet_source,
&s->recorder.video_packet_sink);
sc_demuxer_add_sink(&s->video_demuxer, &s->recorder.video_packet_sink);
if (options->audio) {
sc_packet_source_add_sink(&s->audio_demuxer.packet_source,
&s->recorder.audio_packet_sink);
sc_demuxer_add_sink(&s->audio_demuxer,
&s->recorder.audio_packet_sink);
}
}
@@ -669,6 +663,7 @@ aoa_hid_end:
.mipmaps = options->mipmaps,
.fullscreen = options->fullscreen,
.start_fps_counter = options->start_fps_counter,
.buffering_time = options->display_buffer,
};
if (!sc_screen_init(&s->screen, &screen_params)) {
@@ -676,38 +671,22 @@ aoa_hid_end:
}
screen_initialized = true;
struct sc_frame_source *src = &s->video_decoder.frame_source;
if (options->display_buffer) {
sc_delay_buffer_init(&s->display_buffer, options->display_buffer,
true);
sc_frame_source_add_sink(src, &s->display_buffer.frame_sink);
src = &s->display_buffer.frame_source;
}
sc_frame_source_add_sink(src, &s->screen.frame_sink);
sc_decoder_add_sink(&s->video_decoder, &s->screen.frame_sink);
if (options->audio) {
sc_audio_player_init(&s->audio_player, options->audio_buffer);
sc_frame_source_add_sink(&s->audio_decoder.frame_source,
&s->audio_player.frame_sink);
sc_audio_player_init(&s->audio_player);
sc_decoder_add_sink(&s->audio_decoder, &s->audio_player.frame_sink);
}
}
#ifdef HAVE_V4L2
if (options->v4l2_device) {
if (!sc_v4l2_sink_init(&s->v4l2_sink, options->v4l2_device,
info->frame_size)) {
info->frame_size, options->v4l2_buffer)) {
goto end;
}
struct sc_frame_source *src = &s->video_decoder.frame_source;
if (options->v4l2_buffer) {
sc_delay_buffer_init(&s->v4l2_buffer, options->v4l2_buffer, true);
sc_frame_source_add_sink(src, &s->v4l2_buffer.frame_sink);
src = &s->v4l2_buffer.frame_source;
}
sc_frame_source_add_sink(src, &s->v4l2_sink.frame_sink);
sc_decoder_add_sink(&s->video_decoder, &s->v4l2_sink.frame_sink);
v4l2_sink_initialized = true;
}

View File

@@ -7,6 +7,7 @@
#include "events.h"
#include "icon.h"
#include "options.h"
#include "video_buffer.h"
#include "util/log.h"
#define DISPLAY_MARGINS 96
@@ -332,7 +333,6 @@ static bool
sc_screen_frame_sink_open(struct sc_frame_sink *sink,
const AVCodecContext *ctx) {
assert(ctx->pix_fmt == AV_PIX_FMT_YUV420P);
(void) ctx;
struct sc_screen *screen = DOWNCAST(sink);
(void) screen;
@@ -358,18 +358,30 @@ sc_screen_frame_sink_close(struct sc_frame_sink *sink) {
static bool
sc_screen_frame_sink_push(struct sc_frame_sink *sink, const AVFrame *frame) {
struct sc_screen *screen = DOWNCAST(sink);
return sc_video_buffer_push(&screen->vb, frame);
}
bool previous_skipped;
bool ok = sc_frame_buffer_push(&screen->fb, frame, &previous_skipped);
if (!ok) {
return false;
}
static void
sc_video_buffer_on_new_frame(struct sc_video_buffer *vb, bool previous_skipped,
void *userdata) {
(void) vb;
struct sc_screen *screen = userdata;
// event_failed implies previous_skipped (the previous frame may not have
// been consumed if the event was not sent)
assert(!screen->event_failed || previous_skipped);
bool need_new_event;
if (previous_skipped) {
sc_fps_counter_add_skipped_frame(&screen->fps_counter);
// The SC_EVENT_NEW_FRAME triggered for the previous frame will consume
// this new frame instead
// this new frame instead, unless the previous event failed
need_new_event = screen->event_failed;
} else {
need_new_event = true;
}
if (need_new_event) {
static SDL_Event new_frame_event = {
.type = SC_EVENT_NEW_FRAME,
};
@@ -378,11 +390,11 @@ sc_screen_frame_sink_push(struct sc_frame_sink *sink, const AVFrame *frame) {
int ret = SDL_PushEvent(&new_frame_event);
if (ret < 0) {
LOGW("Could not post new frame event: %s", SDL_GetError());
return false;
screen->event_failed = true;
} else {
screen->event_failed = false;
}
}
return true;
}
bool
@@ -392,6 +404,7 @@ sc_screen_init(struct sc_screen *screen,
screen->has_frame = false;
screen->fullscreen = false;
screen->maximized = false;
screen->event_failed = false;
screen->mouse_capture_key_pressed = 0;
screen->req.x = params->window_x;
@@ -401,13 +414,23 @@ sc_screen_init(struct sc_screen *screen,
screen->req.fullscreen = params->fullscreen;
screen->req.start_fps_counter = params->start_fps_counter;
bool ok = sc_frame_buffer_init(&screen->fb);
static const struct sc_video_buffer_callbacks cbs = {
.on_new_frame = sc_video_buffer_on_new_frame,
};
bool ok = sc_video_buffer_init(&screen->vb, params->buffering_time, &cbs,
screen);
if (!ok) {
return false;
}
ok = sc_video_buffer_start(&screen->vb);
if (!ok) {
goto error_destroy_video_buffer;
}
if (!sc_fps_counter_init(&screen->fps_counter)) {
goto error_destroy_frame_buffer;
goto error_stop_and_join_video_buffer;
}
screen->frame_size = params->frame_size;
@@ -539,8 +562,11 @@ error_destroy_window:
SDL_DestroyWindow(screen->window);
error_destroy_fps_counter:
sc_fps_counter_destroy(&screen->fps_counter);
error_destroy_frame_buffer:
sc_frame_buffer_destroy(&screen->fb);
error_stop_and_join_video_buffer:
sc_video_buffer_stop(&screen->vb);
sc_video_buffer_join(&screen->vb);
error_destroy_video_buffer:
sc_video_buffer_destroy(&screen->vb);
return false;
}
@@ -577,11 +603,13 @@ sc_screen_hide_window(struct sc_screen *screen) {
void
sc_screen_interrupt(struct sc_screen *screen) {
sc_video_buffer_stop(&screen->vb);
sc_fps_counter_interrupt(&screen->fps_counter);
}
void
sc_screen_join(struct sc_screen *screen) {
sc_video_buffer_join(&screen->vb);
sc_fps_counter_join(&screen->fps_counter);
}
@@ -595,7 +623,7 @@ sc_screen_destroy(struct sc_screen *screen) {
SDL_DestroyRenderer(screen->renderer);
SDL_DestroyWindow(screen->window);
sc_fps_counter_destroy(&screen->fps_counter);
sc_frame_buffer_destroy(&screen->fb);
sc_video_buffer_destroy(&screen->vb);
}
static void
@@ -701,7 +729,7 @@ update_texture(struct sc_screen *screen, const AVFrame *frame) {
static bool
sc_screen_update_frame(struct sc_screen *screen) {
av_frame_unref(screen->frame);
sc_frame_buffer_consume(&screen->fb, screen->frame);
sc_video_buffer_consume(&screen->vb, screen->frame);
AVFrame *frame = screen->frame;
sc_fps_counter_add_rendered_frame(&screen->fps_counter);

View File

@@ -10,12 +10,12 @@
#include "controller.h"
#include "coords.h"
#include "fps_counter.h"
#include "frame_buffer.h"
#include "input_manager.h"
#include "opengl.h"
#include "trait/key_processor.h"
#include "trait/frame_sink.h"
#include "trait/mouse_processor.h"
#include "video_buffer.h"
struct sc_screen {
struct sc_frame_sink frame_sink; // frame sink trait
@@ -25,7 +25,7 @@ struct sc_screen {
#endif
struct sc_input_manager im;
struct sc_frame_buffer fb;
struct sc_video_buffer vb;
struct sc_fps_counter fps_counter;
// The initial requested window properties
@@ -59,6 +59,8 @@ struct sc_screen {
bool maximized;
bool mipmaps;
bool event_failed; // in case SDL_PushEvent() returned an error
// To enable/disable mouse capture, a mouse capture key (LALT, LGUI or
// RGUI) must be pressed. This variable tracks the pressed capture key.
SDL_Keycode mouse_capture_key_pressed;
@@ -93,6 +95,8 @@ struct sc_screen_params {
bool fullscreen;
bool start_fps_counter;
sc_tick buffering_time;
};
// initialize screen, create window, renderer and texture (window is hidden)

View File

@@ -173,8 +173,6 @@ sc_server_get_codec_name(enum sc_codec codec) {
return "opus";
case SC_CODEC_AAC:
return "aac";
case SC_CODEC_RAW:
return "raw";
default:
return NULL;
}
@@ -216,7 +214,7 @@ execute_server(struct sc_server *server,
unsigned dyn_idx = count; // from there, the strings are allocated
#define ADD_PARAM(fmt, ...) { \
char *p; \
char *p = (char *) &cmd[count]; \
if (asprintf(&p, fmt, ## __VA_ARGS__) == -1) { \
goto end; \
} \
@@ -754,11 +752,6 @@ sc_server_configure_tcpip_unknown_address(struct sc_server *server,
if (is_already_tcpip) {
// Nothing to do
LOGI("Device already connected via TCP/IP: %s", serial);
server->serial = strdup(serial);
if (!server->serial) {
LOG_OOM();
return false;
}
return true;
}

View File

@@ -1,59 +0,0 @@
#include "frame_source.h"
void
sc_frame_source_init(struct sc_frame_source *source) {
source->sink_count = 0;
}
void
sc_frame_source_add_sink(struct sc_frame_source *source,
struct sc_frame_sink *sink) {
assert(source->sink_count < SC_FRAME_SOURCE_MAX_SINKS);
assert(sink);
assert(sink->ops);
source->sinks[source->sink_count++] = sink;
}
static void
sc_frame_source_sinks_close_firsts(struct sc_frame_source *source,
unsigned count) {
while (count) {
struct sc_frame_sink *sink = source->sinks[--count];
sink->ops->close(sink);
}
}
bool
sc_frame_source_sinks_open(struct sc_frame_source *source,
const AVCodecContext *ctx) {
assert(source->sink_count);
for (unsigned i = 0; i < source->sink_count; ++i) {
struct sc_frame_sink *sink = source->sinks[i];
if (!sink->ops->open(sink, ctx)) {
sc_frame_source_sinks_close_firsts(source, i);
return false;
}
}
return true;
}
void
sc_frame_source_sinks_close(struct sc_frame_source *source) {
assert(source->sink_count);
sc_frame_source_sinks_close_firsts(source, source->sink_count);
}
bool
sc_frame_source_sinks_push(struct sc_frame_source *source,
const AVFrame *frame) {
assert(source->sink_count);
for (unsigned i = 0; i < source->sink_count; ++i) {
struct sc_frame_sink *sink = source->sinks[i];
if (!sink->ops->push(sink, frame)) {
return false;
}
}
return true;
}

View File

@@ -1,38 +0,0 @@
#ifndef SC_FRAME_SOURCE_H
#define SC_FRAME_SOURCE_H
#include "common.h"
#include "frame_sink.h"
#define SC_FRAME_SOURCE_MAX_SINKS 2
/**
* Frame source trait
*
* Component able to send AVFrames should implement this trait.
*/
struct sc_frame_source {
struct sc_frame_sink *sinks[SC_FRAME_SOURCE_MAX_SINKS];
unsigned sink_count;
};
void
sc_frame_source_init(struct sc_frame_source *source);
void
sc_frame_source_add_sink(struct sc_frame_source *source,
struct sc_frame_sink *sink);
bool
sc_frame_source_sinks_open(struct sc_frame_source *source,
const AVCodecContext *ctx);
void
sc_frame_source_sinks_close(struct sc_frame_source *source);
bool
sc_frame_source_sinks_push(struct sc_frame_source *source,
const AVFrame *frame);
#endif
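
A hypothetical lifecycle sketch for the frame source trait above; on a partial open failure, sc_frame_source_sinks_open() closes the sinks it already opened, so the caller never has to roll back itself:

// Sketch only; sink_a/sink_b stand for any struct sc_frame_sink instances,
// codec_ctx and frame come from the decoder.
struct sc_frame_source source;
sc_frame_source_init(&source);
sc_frame_source_add_sink(&source, &sink_a);
sc_frame_source_add_sink(&source, &sink_b);

if (sc_frame_source_sinks_open(&source, codec_ctx)) {
    sc_frame_source_sinks_push(&source, frame); // forwarded to every sink
    sc_frame_source_sinks_close(&source);
}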

View File

@@ -1,70 +0,0 @@
#include "packet_source.h"
void
sc_packet_source_init(struct sc_packet_source *source) {
source->sink_count = 0;
}
void
sc_packet_source_add_sink(struct sc_packet_source *source,
struct sc_packet_sink *sink) {
assert(source->sink_count < SC_PACKET_SOURCE_MAX_SINKS);
assert(sink);
assert(sink->ops);
source->sinks[source->sink_count++] = sink;
}
static void
sc_packet_source_sinks_close_firsts(struct sc_packet_source *source,
unsigned count) {
while (count) {
struct sc_packet_sink *sink = source->sinks[--count];
sink->ops->close(sink);
}
}
bool
sc_packet_source_sinks_open(struct sc_packet_source *source,
const AVCodec *codec) {
assert(source->sink_count);
for (unsigned i = 0; i < source->sink_count; ++i) {
struct sc_packet_sink *sink = source->sinks[i];
if (!sink->ops->open(sink, codec)) {
sc_packet_source_sinks_close_firsts(source, i);
return false;
}
}
return true;
}
void
sc_packet_source_sinks_close(struct sc_packet_source *source) {
assert(source->sink_count);
sc_packet_source_sinks_close_firsts(source, source->sink_count);
}
bool
sc_packet_source_sinks_push(struct sc_packet_source *source,
const AVPacket *packet) {
assert(source->sink_count);
for (unsigned i = 0; i < source->sink_count; ++i) {
struct sc_packet_sink *sink = source->sinks[i];
if (!sink->ops->push(sink, packet)) {
return false;
}
}
return true;
}
void
sc_packet_source_sinks_disable(struct sc_packet_source *source) {
assert(source->sink_count);
for (unsigned i = 0; i < source->sink_count; ++i) {
struct sc_packet_sink *sink = source->sinks[i];
if (sink->ops->disable) {
sink->ops->disable(sink);
}
}
}

View File

@@ -1,41 +0,0 @@
#ifndef SC_PACKET_SOURCE_H
#define SC_PACKET_SOURCE_H
#include "common.h"
#include "packet_sink.h"
#define SC_PACKET_SOURCE_MAX_SINKS 2
/**
* Packet source trait
*
* Component able to send AVPackets should implement this trait.
*/
struct sc_packet_source {
struct sc_packet_sink *sinks[SC_PACKET_SOURCE_MAX_SINKS];
unsigned sink_count;
};
void
sc_packet_source_init(struct sc_packet_source *source);
void
sc_packet_source_add_sink(struct sc_packet_source *source,
struct sc_packet_sink *sink);
bool
sc_packet_source_sinks_open(struct sc_packet_source *source,
const AVCodec *codec);
void
sc_packet_source_sinks_close(struct sc_packet_source *source);
bool
sc_packet_source_sinks_push(struct sc_packet_source *source,
const AVPacket *packet);
void
sc_packet_source_sinks_disable(struct sc_packet_source *source);
#endif

View File

@@ -14,8 +14,6 @@
#define DEFAULT_TIMEOUT 1000
#define SC_HID_EVENT_QUEUE_MAX 64
static void
sc_hid_event_log(const struct sc_hid_event *event) {
// HID Event: [00] FF FF FF FF...
@@ -50,20 +48,14 @@ sc_hid_event_destroy(struct sc_hid_event *hid_event) {
bool
sc_aoa_init(struct sc_aoa *aoa, struct sc_usb *usb,
struct sc_acksync *acksync) {
sc_vecdeque_init(&aoa->queue);
if (!sc_vecdeque_reserve(&aoa->queue, SC_HID_EVENT_QUEUE_MAX)) {
return false;
}
cbuf_init(&aoa->queue);
if (!sc_mutex_init(&aoa->mutex)) {
sc_vecdeque_destroy(&aoa->queue);
return false;
}
if (!sc_cond_init(&aoa->event_cond)) {
sc_mutex_destroy(&aoa->mutex);
sc_vecdeque_destroy(&aoa->queue);
return false;
}
@@ -77,10 +69,9 @@ sc_aoa_init(struct sc_aoa *aoa, struct sc_usb *usb,
void
sc_aoa_destroy(struct sc_aoa *aoa) {
// Destroy remaining events
while (!sc_vecdeque_is_empty(&aoa->queue)) {
struct sc_hid_event *event = sc_vecdeque_popref(&aoa->queue);
assert(event);
sc_hid_event_destroy(event);
struct sc_hid_event event;
while (cbuf_take(&aoa->queue, &event)) {
sc_hid_event_destroy(&event);
}
sc_cond_destroy(&aoa->event_cond);
@@ -221,19 +212,13 @@ sc_aoa_push_hid_event(struct sc_aoa *aoa, const struct sc_hid_event *event) {
}
sc_mutex_lock(&aoa->mutex);
bool full = sc_vecdeque_is_full(&aoa->queue);
if (!full) {
bool was_empty = sc_vecdeque_is_empty(&aoa->queue);
sc_vecdeque_push_noresize(&aoa->queue, *event);
if (was_empty) {
sc_cond_signal(&aoa->event_cond);
}
bool was_empty = cbuf_is_empty(&aoa->queue);
bool res = cbuf_push(&aoa->queue, *event);
if (was_empty) {
sc_cond_signal(&aoa->event_cond);
}
// Otherwise (if the queue is full), the event is discarded
sc_mutex_unlock(&aoa->mutex);
return !full;
return res;
}
static int
@@ -242,7 +227,7 @@ run_aoa_thread(void *data) {
for (;;) {
sc_mutex_lock(&aoa->mutex);
while (!aoa->stopped && sc_vecdeque_is_empty(&aoa->queue)) {
while (!aoa->stopped && cbuf_is_empty(&aoa->queue)) {
sc_cond_wait(&aoa->event_cond, &aoa->mutex);
}
if (aoa->stopped) {
@@ -250,9 +235,11 @@ run_aoa_thread(void *data) {
sc_mutex_unlock(&aoa->mutex);
break;
}
struct sc_hid_event event;
bool non_empty = cbuf_take(&aoa->queue, &event);
assert(non_empty);
(void) non_empty;
assert(!sc_vecdeque_is_empty(&aoa->queue));
struct sc_hid_event event = sc_vecdeque_pop(&aoa->queue);
uint64_t ack_to_wait = event.ack_to_wait;
sc_mutex_unlock(&aoa->mutex);

View File

@@ -8,9 +8,9 @@
#include "usb.h"
#include "util/acksync.h"
#include "util/cbuf.h"
#include "util/thread.h"
#include "util/tick.h"
#include "util/vecdeque.h"
struct sc_hid_event {
uint16_t accessory_id;
@@ -27,7 +27,7 @@ sc_hid_event_init(struct sc_hid_event *hid_event, uint16_t accessory_id,
void
sc_hid_event_destroy(struct sc_hid_event *hid_event);
struct sc_hid_event_queue SC_VECDEQUE(struct sc_hid_event);
struct sc_hid_event_queue CBUF(struct sc_hid_event, 64);
struct sc_aoa {
struct sc_usb *usb;

View File

@@ -1,26 +0,0 @@
#include "average.h"
#include <assert.h>
void
sc_average_init(struct sc_average *avg, unsigned range) {
avg->range = range;
avg->avg = 0;
avg->count = 0;
}
void
sc_average_push(struct sc_average *avg, float value) {
if (avg->count < avg->range) {
++avg->count;
}
assert(avg->count);
avg->avg = ((avg->count - 1) * avg->avg + value) / avg->count;
}
float
sc_average_get(struct sc_average *avg) {
assert(avg->count);
return avg->avg;
}

View File

@@ -1,40 +0,0 @@
#ifndef SC_AVERAGE
#define SC_AVERAGE
#include "common.h"
#include <stdbool.h>
#include <stdint.h>
struct sc_average {
// Current average value
float avg;
// Target range, to update the average as follows:
// avg = ((range - 1) * avg + new_value) / range
unsigned range;
// Number of values pushed when less than range (count <= range).
// The purpose is to handle the first (range - 1) values properly.
unsigned count;
};
void
sc_average_init(struct sc_average *avg, unsigned range);
/**
* Push a new value to update the "rolling" average
*/
void
sc_average_push(struct sc_average *avg, float value);
/**
* Get the current average value
*
* It is an error to call this function if sc_average_push() has not been
* called at least once.
*/
float
sc_average_get(struct sc_average *avg);
#endif
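
A small worked example (hypothetical values) of the rolling average above, with range = 3: while count < range the result is an exact mean, afterwards older values are progressively forgotten.

struct sc_average avg;
sc_average_init(&avg, 3);
sc_average_push(&avg, 10); // count=1, avg = (0*0  + 10) / 1 = 10
sc_average_push(&avg, 20); // count=2, avg = (1*10 + 20) / 2 = 15
sc_average_push(&avg, 30); // count=3, avg = (2*15 + 30) / 3 = 20
sc_average_push(&avg, 40); // count stays 3, avg = (2*20 + 40) / 3 = 26.67
float value = sc_average_get(&avg); // 26.67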

View File

@@ -1,14 +1,13 @@
#include "bytebuf.h"
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include "util/log.h"
bool
sc_bytebuf_init(struct sc_bytebuf *buf, size_t alloc_size) {
assert(alloc_size);
// sufficient, but use more for alignment.
buf->data = malloc(alloc_size);
if (!buf->data) {
LOG_OOM();
@@ -30,7 +29,7 @@ sc_bytebuf_destroy(struct sc_bytebuf *buf) {
void
sc_bytebuf_read(struct sc_bytebuf *buf, uint8_t *to, size_t len) {
assert(len);
assert(len <= sc_bytebuf_read_available(buf));
assert(sc_bytebuf_read_remaining(buf) >= len);
assert(buf->tail != buf->head); // the buffer must not be empty
size_t right_limit = buf->tail < buf->head ? buf->head : buf->alloc_size;
@@ -50,40 +49,42 @@ sc_bytebuf_read(struct sc_bytebuf *buf, uint8_t *to, size_t len) {
void
sc_bytebuf_skip(struct sc_bytebuf *buf, size_t len) {
assert(len);
assert(len <= sc_bytebuf_read_available(buf));
assert(sc_bytebuf_read_remaining(buf) >= len);
assert(buf->tail != buf->head); // the buffer must not be empty
buf->tail = (buf->tail + len) % buf->alloc_size;
}
static inline void
sc_bytebuf_write_step0(struct sc_bytebuf *buf, const uint8_t *from,
size_t len) {
size_t right_len = buf->alloc_size - buf->head;
void
sc_bytebuf_write(struct sc_bytebuf *buf, const uint8_t *from, size_t len) {
assert(len);
size_t max_len = buf->alloc_size - 1;
if (len >= max_len) {
// Copy only the right-most bytes
memcpy(buf->data, from + len - max_len, max_len);
buf->tail = 0;
buf->head = max_len;
return;
}
size_t right_limit = buf->head < buf->tail ? buf->tail : buf->alloc_size;
size_t right_len = right_limit - buf->head;
if (len < right_len) {
right_len = len;
}
memcpy(buf->data + buf->head, from, right_len);
if (len > right_len) {
memcpy(buf->data, from + right_len, len - right_len);
}
}
static inline void
sc_bytebuf_write_step1(struct sc_bytebuf *buf, size_t len) {
size_t empty_space = sc_bytebuf_write_remaining(buf);
if (len > empty_space) {
buf->tail = (buf->tail + len - empty_space) % buf->alloc_size;
}
buf->head = (buf->head + len) % buf->alloc_size;
}
void
sc_bytebuf_write(struct sc_bytebuf *buf, const uint8_t *from, size_t len) {
assert(len);
assert(len <= sc_bytebuf_write_available(buf));
sc_bytebuf_write_step0(buf, from, len);
sc_bytebuf_write_step1(buf, len);
}
void
sc_bytebuf_prepare_write(struct sc_bytebuf *buf, const uint8_t *from,
size_t len) {
@@ -94,11 +95,20 @@ sc_bytebuf_prepare_write(struct sc_bytebuf *buf, const uint8_t *from,
// be called with lock held).
assert(len < buf->alloc_size - 1);
sc_bytebuf_write_step0(buf, from, len);
size_t right_len = buf->alloc_size - buf->head;
if (len < right_len) {
right_len = len;
}
memcpy(buf->data + buf->head, from, right_len);
if (len > right_len) {
memcpy(buf->data, from + right_len, len - right_len);
}
}
void
sc_bytebuf_commit_write(struct sc_bytebuf *buf, size_t len) {
assert(len <= sc_bytebuf_write_available(buf));
sc_bytebuf_write_step1(buf, len);
assert(len <= sc_bytebuf_write_remaining(buf));
buf->head = (buf->head + len) % buf->alloc_size;
}

View File

@@ -11,10 +11,10 @@ struct sc_bytebuf {
// The actual capacity is (allocated - 1) so that head == tail is
// non-ambiguous
size_t alloc_size;
size_t head; // writer cursor
size_t tail; // reader cursor
size_t head;
size_t tail;
// empty: tail == head
// full: ((tail + 1) % alloc_size) == head
// full: (tail + 1) % allocated == head
};
bool
@@ -23,10 +23,10 @@ sc_bytebuf_init(struct sc_bytebuf *buf, size_t alloc_size);
/**
* Copy from the bytebuf to a user-provided array
*
* The caller must check that len <= sc_bytebuf_read_available() (it is an
* error to attempt to read more bytes than available).
* The caller must check that len <= buf->len (it is an error to attempt to read
* more bytes than available).
*
* This function is guaranteed not to write to buf->head.
* This function is guaranteed to not change the head.
*/
void
sc_bytebuf_read(struct sc_bytebuf *buf, uint8_t *to, size_t len);
@@ -34,13 +34,13 @@ sc_bytebuf_read(struct sc_bytebuf *buf, uint8_t *to, size_t len);
/**
* Drop len bytes from the buffer
*
* The caller must check that len <= sc_bytebuf_read_available() (it is an
* error to attempt to skip more bytes than available).
* The caller must check that len <= buf->len (it is an error to attempt to skip
* more bytes than available).
*
* This function is guaranteed not to write to buf->head.
* This function is guaranteed to not change the head.
*
* It is equivalent to call sc_bytebuf_read() to some array and discard the
* array (but this function is more efficient since there is no copy).
* array (but more efficient since there is no copy).
*/
void
sc_bytebuf_skip(struct sc_bytebuf *buf, size_t len);
@@ -48,10 +48,9 @@ sc_bytebuf_skip(struct sc_bytebuf *buf, size_t len);
/**
* Copy the user-provided array to the bytebuf
*
* The caller must check that len <= sc_bytebuf_write_available() (it is an
* error to write more bytes than the remaining available space).
*
* This function is guaranteed not to write to buf->tail.
* The length of the input array is not restricted:
* if len >= sc_bytebuf_write_remaining(buf), then the excess input bytes
* will overwrite the oldest bytes in the buffer.
*/
void
sc_bytebuf_write(struct sc_bytebuf *buf, const uint8_t *from, size_t len);
@@ -59,16 +58,14 @@ sc_bytebuf_write(struct sc_bytebuf *buf, const uint8_t *from, size_t len);
/**
* Copy the user-provided array to the bytebuf, but do not advance the cursor
*
* The caller must check that len <= sc_bytebuf_write_available() (it is an
* error to write more bytes than the remaining available space).
* The caller must check that len <= buf->len (it is an error to attempt to
* write more bytes than available).
*
* After this function is called, the write must be committed with
* sc_bytebuf_commit_write().
*
* The purpose of this mechanism is to acquire a lock only to commit the write,
* but not to perform the actual copy.
*
* This function is guaranteed not to access buf->tail.
*/
void
sc_bytebuf_prepare_write(struct sc_bytebuf *buf, const uint8_t *from,
@@ -86,28 +83,21 @@ sc_bytebuf_commit_write(struct sc_bytebuf *buf, size_t len);
* It is an error to read more bytes than available.
*/
static inline size_t
sc_bytebuf_read_available(struct sc_bytebuf *buf) {
sc_bytebuf_read_remaining(struct sc_bytebuf *buf) {
return (buf->alloc_size + buf->head - buf->tail) % buf->alloc_size;
}
/**
* Return the number of bytes which can be written
* Return the number of bytes which can be written without overwriting
*
* It is an error to write more bytes than available.
* It is not an error to write more bytes than the available space, but this
* would overwrite the oldest bytes in the buffer.
*/
static inline size_t
sc_bytebuf_write_available(struct sc_bytebuf *buf) {
sc_bytebuf_write_remaining(struct sc_bytebuf *buf) {
return (buf->alloc_size + buf->tail - buf->head - 1) % buf->alloc_size;
}
/**
* Return the actual capacity of the buffer (read available + write available)
*/
static inline size_t
sc_bytebuf_capacity(struct sc_bytebuf *buf) {
return buf->alloc_size - 1;
}
void
sc_bytebuf_destroy(struct sc_bytebuf *buf);
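To illustrate the two-step write documented above (copy first, publish later), here is a hypothetical single-producer sketch; the helper name producer_write, the shared mutex and the include paths are illustrative assumptions, not part of the API:

#include <stddef.h>
#include <stdint.h>
#include "util/bytebuf.h" // assumed include path
#include "util/thread.h"  // for sc_mutex (assumed)

// The producer copies without the lock, then commits under the lock so that
// the consumer (which reads under the same lock) observes the new bytes.
static void
producer_write(struct sc_bytebuf *buf, sc_mutex *mutex,
               const uint8_t *data, size_t len) {
    // No lock needed for the copy: with a single producer, the empty space
    // known at the previous commit can only grow, so writing up to that
    // known-safe size cannot collide with the reader.
    sc_bytebuf_prepare_write(buf, data, len);

    // Lock only to publish the write (advance the head).
    sc_mutex_lock(mutex);
    sc_bytebuf_commit_write(buf, len);
    sc_mutex_unlock(mutex);
}

For the index arithmetic above, a worked example: with alloc_size = 10, head = 7 and tail = 2, read_remaining = (10 + 7 - 2) % 10 = 5 bytes are readable and write_remaining = (10 + 2 - 7 - 1) % 10 = 4 bytes are writable, which together equal the capacity of 9.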

app/src/util/cbuf.h (new file, 52 lines)
View File

@@ -0,0 +1,52 @@
// generic circular buffer (bounded queue) implementation
#ifndef SC_CBUF_H
#define SC_CBUF_H
#include "common.h"
#include <stdbool.h>
#include <unistd.h>
// To define a circular buffer type of 20 ints:
// struct cbuf_int CBUF(int, 20);
//
// data has length CAP + 1 to distinguish empty vs full.
#define CBUF(TYPE, CAP) { \
TYPE data[(CAP) + 1]; \
size_t head; \
size_t tail; \
}
#define cbuf_size_(PCBUF) \
(sizeof((PCBUF)->data) / sizeof(*(PCBUF)->data))
#define cbuf_is_empty(PCBUF) \
((PCBUF)->head == (PCBUF)->tail)
#define cbuf_is_full(PCBUF) \
(((PCBUF)->head + 1) % cbuf_size_(PCBUF) == (PCBUF)->tail)
#define cbuf_init(PCBUF) \
(void) ((PCBUF)->head = (PCBUF)->tail = 0)
#define cbuf_push(PCBUF, ITEM) \
({ \
bool ok = !cbuf_is_full(PCBUF); \
if (ok) { \
(PCBUF)->data[(PCBUF)->head] = (ITEM); \
(PCBUF)->head = ((PCBUF)->head + 1) % cbuf_size_(PCBUF); \
} \
ok; \
})
#define cbuf_take(PCBUF, PITEM) \
({ \
bool ok = !cbuf_is_empty(PCBUF); \
if (ok) { \
*(PITEM) = (PCBUF)->data[(PCBUF)->tail]; \
(PCBUF)->tail = ((PCBUF)->tail + 1) % cbuf_size_(PCBUF); \
} \
ok; \
})
#endif

View File

@@ -125,30 +125,8 @@ sc_av_log_callback(void *avcl, int level, const char *fmt, va_list vl) {
free(local_fmt);
}
static const char *const sc_sdl_log_priority_names[SDL_NUM_LOG_PRIORITIES] = {
[SDL_LOG_PRIORITY_VERBOSE] = "VERBOSE",
[SDL_LOG_PRIORITY_DEBUG] = "DEBUG",
[SDL_LOG_PRIORITY_INFO] = "INFO",
[SDL_LOG_PRIORITY_WARN] = "WARN",
[SDL_LOG_PRIORITY_ERROR] = "ERROR",
[SDL_LOG_PRIORITY_CRITICAL] = "CRITICAL",
};
static void SDLCALL
sc_sdl_log_print(void *userdata, int category, SDL_LogPriority priority,
const char *message) {
(void) userdata;
(void) category;
FILE *out = priority < SDL_LOG_PRIORITY_WARN ? stdout : stderr;
assert(priority < SDL_NUM_LOG_PRIORITIES);
const char *prio_name = sc_sdl_log_priority_names[priority];
fprintf(out, "%s: %s\n", prio_name, message);
}
void
sc_log_configure() {
SDL_LogSetOutputFunction(sc_sdl_log_print, NULL);
// Redirect FFmpeg logs to SDL logs
av_log_set_callback(sc_av_log_callback);
}

View File

@@ -1,14 +0,0 @@
#include "memory.h"
#include <stdlib.h>
#include <errno.h>
void *
sc_allocarray(size_t nmemb, size_t size) {
size_t bytes;
if (__builtin_mul_overflow(nmemb, size, &bytes)) {
errno = ENOMEM;
return NULL;
}
return malloc(bytes);
}

View File

@@ -1,15 +0,0 @@
#ifndef SC_MEMORY_H
#define SC_MEMORY_H
#include <stddef.h>
/**
* Allocate an array of `nmemb` items of `size` bytes each
*
* Like calloc(), but without initialization.
* Like reallocarray(), but without reallocation.
*/
void *
sc_allocarray(size_t nmemb, size_t size);
#endif

View File

@@ -30,8 +30,8 @@ bool
net_init(void) {
#ifdef _WIN32
WSADATA wsa;
int res = WSAStartup(MAKEWORD(1, 1), &wsa);
if (res) {
int res = WSAStartup(MAKEWORD(2, 2), &wsa) < 0;
if (res < 0) {
LOGE("WSAStartup failed with error %d", res);
return false;
}

app/src/util/queue.h (new file, 77 lines)
View File

@@ -0,0 +1,77 @@
// generic intrusive FIFO queue
#ifndef SC_QUEUE_H
#define SC_QUEUE_H
#include "common.h"
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
// To define a queue type of "struct foo":
// struct queue_foo QUEUE(struct foo);
#define SC_QUEUE(TYPE) { \
TYPE *first; \
TYPE *last; \
}
#define sc_queue_init(PQ) \
(void) ((PQ)->first = (PQ)->last = NULL)
#define sc_queue_is_empty(PQ) \
!(PQ)->first
// NEXTFIELD is the field in the ITEM type used for intrusive linked-list
//
// For example:
// struct foo {
// int value;
// struct foo *next;
// };
//
// // define the type "struct my_queue"
// struct my_queue SC_QUEUE(struct foo);
//
// struct my_queue queue;
// sc_queue_init(&queue);
//
// struct foo v1 = { .value = 42 };
// struct foo v2 = { .value = 27 };
//
// sc_queue_push(&queue, next, &v1);
// sc_queue_push(&queue, next, &v2);
//
// struct foo *foo;
// sc_queue_take(&queue, next, &foo);
// assert(foo->value == 42);
// sc_queue_take(&queue, next, &foo);
// assert(foo->value == 27);
// assert(sc_queue_is_empty(&queue));
//
// push a new item into the queue
#define sc_queue_push(PQ, NEXTFIELD, ITEM) \
(void) ({ \
(ITEM)->NEXTFIELD = NULL; \
if (sc_queue_is_empty(PQ)) { \
(PQ)->first = (PQ)->last = (ITEM); \
} else { \
(PQ)->last->NEXTFIELD = (ITEM); \
(PQ)->last = (ITEM); \
} \
})
// take the next item and remove it from the queue (the queue must not be empty)
// the result is stored in *(PITEM)
// (without typeof(), we could not store a local variable having the correct
// type so that we can "return" it)
#define sc_queue_take(PQ, NEXTFIELD, PITEM) \
(void) ({ \
assert(!sc_queue_is_empty(PQ)); \
*(PITEM) = (PQ)->first; \
(PQ)->first = (PQ)->first->NEXTFIELD; \
})
// no need to update (PQ)->last if the queue is left empty:
// (PQ)->last is undefined if !(PQ)->first anyway
#endif

View File

@@ -23,39 +23,6 @@ sc_thread_create(sc_thread *thread, sc_thread_fn fn, const char *name,
return true;
}
static SDL_ThreadPriority
to_sdl_thread_priority(enum sc_thread_priority priority) {
switch (priority) {
case SC_THREAD_PRIORITY_TIME_CRITICAL:
#ifdef SCRCPY_SDL_HAS_THREAD_PRIORITY_TIME_CRITICAL
return SDL_THREAD_PRIORITY_TIME_CRITICAL;
#else
// fall through
#endif
case SC_THREAD_PRIORITY_HIGH:
return SDL_THREAD_PRIORITY_HIGH;
case SC_THREAD_PRIORITY_NORMAL:
return SDL_THREAD_PRIORITY_NORMAL;
case SC_THREAD_PRIORITY_LOW:
return SDL_THREAD_PRIORITY_LOW;
default:
assert(!"Unknown thread priority");
return 0;
}
}
bool
sc_thread_set_priority(enum sc_thread_priority priority) {
SDL_ThreadPriority sdl_priority = to_sdl_thread_priority(priority);
int r = SDL_SetThreadPriority(sdl_priority);
if (r) {
LOGD("Could not set thread priority: %s", SDL_GetError());
return false;
}
return true;
}
void
sc_thread_join(sc_thread *thread, int *status) {
SDL_WaitThread(thread->thread, status);

View File

@@ -21,13 +21,6 @@ typedef struct sc_thread {
SDL_Thread *thread;
} sc_thread;
enum sc_thread_priority {
SC_THREAD_PRIORITY_LOW,
SC_THREAD_PRIORITY_NORMAL,
SC_THREAD_PRIORITY_HIGH,
SC_THREAD_PRIORITY_TIME_CRITICAL,
};
typedef struct sc_mutex {
SDL_mutex *mutex;
#ifndef NDEBUG
@@ -46,9 +39,6 @@ sc_thread_create(sc_thread *thread, sc_thread_fn fn, const char *name,
void
sc_thread_join(sc_thread *thread, int *status);
bool
sc_thread_set_priority(enum sc_thread_priority priority);
bool
sc_mutex_init(sc_mutex *mutex);

View File

@@ -1,379 +0,0 @@
#ifndef SC_VECDEQUE_H
#define SC_VECDEQUE_H
#include "common.h"
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include "util/memory.h"
/**
* A double-ended queue implemented with a growable ring buffer.
*
* Inspired from the Rust VecDeque type:
* <https://doc.rust-lang.org/std/collections/struct.VecDeque.html>
*/
/**
* VecDeque struct body
*
* A VecDeque is a dynamic ring-buffer, managed by the sc_vecdeque_* helpers.
*
* It is generic over the type of its items, so it is implemented via macros.
*
* To use a VecDeque, a new type must be defined:
*
* struct vecdeque_int SC_VECDEQUE(int);
*
* The struct may be anonymous:
*
* struct SC_VECDEQUE(const char *) names;
*
* Functions and macros having name ending with '_' are private.
*/
#define SC_VECDEQUE(type) { \
size_t cap; \
size_t origin; \
size_t size; \
type *data; \
}
/**
* Static initializer for a VecDeque
*/
#define SC_VECDEQUE_INITIALIZER { 0, 0, 0, NULL }
/**
* Initialize an empty VecDeque
*/
#define sc_vecdeque_init(pv) \
({ \
(pv)->cap = 0; \
(pv)->origin = 0; \
(pv)->size = 0; \
(pv)->data = NULL; \
})
/**
* Destroy a VecDeque
*/
#define sc_vecdeque_destroy(pv) \
free((pv)->data)
/**
* Clear a VecDeque
*
* Remove all items.
*/
#define sc_vecdeque_clear(pv) \
(void) ({ \
sc_vecdeque_destroy(pv); \
sc_vecdeque_init(pv); \
})
/**
* Returns the content size
*/
#define sc_vecdeque_size(pv) \
(pv)->size
/**
* Return whether the VecDeque is empty (i.e. its size is 0)
*/
#define sc_vecdeque_is_empty(pv) \
((pv)->size == 0)
/**
* Return whether the VecDeque is full
*
* A VecDeque is full when its size equals its current capacity. However, being
* full does not prevent pushing a new item (with sc_vecdeque_push()), since
* pushing will increase its capacity.
*/
#define sc_vecdeque_is_full(pv) \
((pv)->size == (pv)->cap)
/**
* The minimal allocation size, in number of items
*
* Private.
*/
#define SC_VECDEQUE_MINCAP_ ((size_t) 10)
/**
* The maximal allocation size, in number of items
*
* Use SIZE_MAX/2 to fit in ssize_t, and so that cap*1.5 does not overflow.
*
* Private.
*/
#define sc_vecdeque_max_cap_(pv) (SIZE_MAX / 2 / sizeof(*(pv)->data))
/**
* Realloc the internal array to a specific capacity
*
* On reallocation success, update the VecDeque capacity (`*pcap`) and origin
* (`*porigin`), and return the reallocated data.
*
* On reallocation failure, return NULL without any change.
*
* Private.
*
* \param ptr the current `data` field of the SC_VECDEQUE to realloc
* \param newcap the requested capacity, in number of items
* \param item_size the size of one item (the generic type is unknown from this
* function)
* \param pcap a pointer to the `cap` field of the SC_VECDEQUE [IN/OUT]
* \param porigin a pointer to pv->origin [IN/OUT]
* \param size the `size` field of the SC_VECDEQUE
* \return the new array to assign to the `data` field of the SC_VECDEQUE (if
* not NULL)
*/
static inline void *
sc_vecdeque_reallocdata_(void *ptr, size_t newcap, size_t item_size,
size_t *pcap, size_t *porigin, size_t size) {
size_t oldcap = *pcap;
size_t oldorigin = *porigin;
assert(newcap > oldcap); // Could only grow
if (oldorigin + size <= oldcap) {
// The current content will stay in place, just realloc
//
// As an example, here is the content of a ring-buffer (oldcap=10)
// before the realloc:
//
// _ _ 2 3 4 5 6 7 _ _
// ^
// origin
//
// It is resized (newcap=15), e.g. with sc_vecdeque_reserve():
//
// _ _ 2 3 4 5 6 7 _ _ _ _ _ _ _
// ^
// origin
void *newptr = reallocarray(ptr, newcap, item_size);
if (!newptr) {
return NULL;
}
*pcap = newcap;
return newptr;
}
// Copy the current content to the new array
//
// As an example, here is the content of a ring-buffer (oldcap=10) before
// the realloc:
//
// 5 6 7 _ _ 0 1 2 3 4
// ^
// origin
//
// It is resized (newcap=15), e.g. with sc_vecdeque_reserve():
//
// 0 1 2 3 4 5 6 7 _ _ _ _ _ _ _
// ^
// origin
assert(size);
void *newptr = sc_allocarray(newcap, item_size);
if (!newptr) {
return NULL;
}
size_t right_len = MIN(size, oldcap - oldorigin);
assert(right_len);
memcpy(newptr, ptr + (oldorigin * item_size), right_len * item_size);
if (size > right_len) {
memcpy(newptr + (right_len * item_size), ptr,
(size - right_len) * item_size);
}
free(ptr);
*pcap = newcap;
*porigin = 0;
return newptr;
}
/**
* Macro to realloc the internal data to a new capacity
*
* Private.
*
* \retval true on success
* \retval false on allocation failure (the VecDeque is left untouched)
*/
#define sc_vecdeque_realloc_(pv, newcap) \
({ \
void *p = sc_vecdeque_reallocdata_((pv)->data, newcap, \
sizeof(*(pv)->data), &(pv)->cap, \
&(pv)->origin, (pv)->size); \
if (p) { \
(pv)->data = p; \
} \
(bool) p; \
});
static inline size_t
sc_vecdeque_growsize_(size_t value)
{
/* integer multiplication by 1.5 */
return value + (value >> 1);
}
/**
* Increase the capacity of the VecDeque to at least `mincap`
*
* \param pv a pointer to the VecDeque
* \param mincap (`size_t`) the requested capacity
* \retval true on success
* \retval false on allocation failure (the VecDeque is left untouched)
*/
#define sc_vecdeque_reserve(pv, mincap) \
({ \
assert(mincap <= sc_vecdeque_max_cap_(pv)); \
bool ok; \
/* avoid to allocate tiny arrays (< SC_VECDEQUE_MINCAP_) */ \
size_t mincap_ = MAX(mincap, SC_VECDEQUE_MINCAP_); \
if (mincap_ <= (pv)->cap) { \
/* nothing to do */ \
ok = true; \
} else if (mincap_ <= sc_vecdeque_max_cap_(pv)) { \
/* not too big */ \
size_t newsize = sc_vecdeque_growsize_((pv)->cap); \
newsize = CLAMP(newsize, mincap_, sc_vecdeque_max_cap_(pv)); \
ok = sc_vecdeque_realloc_(pv, newsize); \
} else { \
ok = false; \
} \
ok; \
})
/**
* Automatically grow the VecDeque capacity
*
* Private.
*
* \retval true on success
* \retval false on allocation failure (the VecDeque is left untouched)
*/
#define sc_vecdeque_grow_(pv) \
({ \
bool ok; \
if ((pv)->cap < sc_vecdeque_max_cap_(pv)) { \
size_t newsize = sc_vecdeque_growsize_((pv)->cap); \
newsize = CLAMP(newsize, SC_VECDEQUE_MINCAP_, \
sc_vecdeque_max_cap_(pv)); \
ok = sc_vecdeque_realloc_(pv, newsize); \
} else { \
ok = false; \
} \
ok; \
})
/**
* Grow the VecDeque capacity if it is full
*
* Private.
*
* \retval true on success
* \retval false on allocation failure (the VecDeque is left untouched)
*/
#define sc_vecdeque_grow_if_needed_(pv) \
(!sc_vecdeque_is_full(pv) || sc_vecdeque_grow_(pv))
/**
* Push an uninitialized item, and return a pointer to it
*
* It does not attempt to resize the VecDeque. It is an error to call this
* function if the VecDeque is full.
*
* This function cannot fail. It returns a valid non-NULL pointer to the
* uninitialized item just pushed.
*/
#define sc_vecdeque_push_hole_noresize(pv) \
({ \
assert(!sc_vecdeque_is_full(pv)); \
++(pv)->size; \
&(pv)->data[((pv)->origin + (pv)->size - 1) % (pv)->cap]; \
})
/**
* Push an uninitialized item, and return a pointer to it
*
* If the VecDeque is full, it is resized.
*
* This function returns either a valid non-NULL pointer to the uninitialized
* item just pushed, or NULL on reallocation failure.
*/
#define sc_vecdeque_push_hole(pv) \
(sc_vecdeque_grow_if_needed_(pv) ? \
sc_vecdeque_push_hole_noresize(pv) : NULL)
/**
* Push an item
*
* It does not attempt to resize the VecDeque. It is an error to call this
* function if the VecDeque is full.
*
* This function cannot fail.
*/
#define sc_vecdeque_push_noresize(pv, item) \
(void) ({ \
assert(!sc_vecdeque_is_full(pv)); \
++(pv)->size; \
(pv)->data[((pv)->origin + (pv)->size - 1) % (pv)->cap] = item; \
})
/**
* Push an item
*
* If the VecDeque is full, it is resized.
*
* \retval true on success
* \retval false on allocation failure (the VecDeque is left untouched)
*/
#define sc_vecdeque_push(pv, item) \
({ \
bool ok = sc_vecdeque_grow_if_needed_(pv); \
if (ok) { \
sc_vecdeque_push_noresize(pv, item); \
} \
ok; \
})
/**
* Pop an item and return a pointer to it (still in the VecDeque)
*
* Returning a pointer allows the caller to destroy it in place without copy
* (especially if the item type is big).
*
* It is an error to call this function if the VecDeque is empty.
*/
#define sc_vecdeque_popref(pv) \
({ \
assert(!sc_vecdeque_is_empty(pv)); \
size_t pos = (pv)->origin; \
(pv)->origin = ((pv)->origin + 1) % (pv)->cap; \
--(pv)->size; \
&(pv)->data[pos]; \
})
/**
* Pop an item and return it
*
* It is an error to call this function if the VecDeque is empty.
*/
#define sc_vecdeque_pop(pv) \
(*sc_vecdeque_popref(pv))
#endif
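As a quick usage illustration of the API documented above (a sketch with error handling reduced to asserts; the function name is illustrative):

#include <assert.h>
#include "util/vecdeque.h" // assumed include path

static void
vecdeque_example(void) {
    // Anonymous VecDeque of ints, starting empty
    struct SC_VECDEQUE(int) vdq = SC_VECDEQUE_INITIALIZER;

    bool ok = sc_vecdeque_push(&vdq, 5);  // grows the capacity on demand
    assert(ok);
    ok = sc_vecdeque_push(&vdq, 12);
    assert(ok);

    int v = sc_vecdeque_pop(&vdq);        // FIFO: pops from the origin
    assert(v == 5);
    v = sc_vecdeque_pop(&vdq);
    assert(v == 12);
    assert(sc_vecdeque_is_empty(&vdq));

    sc_vecdeque_destroy(&vdq);
}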

View File

@@ -118,7 +118,7 @@ static inline void *
sc_vector_reallocdata_(void *ptr, size_t count, size_t size,
size_t *restrict pcap, size_t *restrict psize)
{
void *p = reallocarray(ptr, count, size);
void *p = realloc(ptr, count * size);
if (!p) {
return NULL;
}

View File

@@ -126,7 +126,7 @@ run_v4l2_sink(void *data) {
vs->has_frame = false;
sc_mutex_unlock(&vs->mutex);
sc_frame_buffer_consume(&vs->fb, vs->frame);
sc_video_buffer_consume(&vs->vb, vs->frame);
bool ok = encode_and_write_frame(vs, vs->frame);
av_frame_unref(vs->frame);
@@ -141,19 +141,41 @@ run_v4l2_sink(void *data) {
return 0;
}
static void
sc_video_buffer_on_new_frame(struct sc_video_buffer *vb, bool previous_skipped,
void *userdata) {
(void) vb;
struct sc_v4l2_sink *vs = userdata;
if (!previous_skipped) {
sc_mutex_lock(&vs->mutex);
vs->has_frame = true;
sc_cond_signal(&vs->cond);
sc_mutex_unlock(&vs->mutex);
}
}
static bool
sc_v4l2_sink_open(struct sc_v4l2_sink *vs, const AVCodecContext *ctx) {
assert(ctx->pix_fmt == AV_PIX_FMT_YUV420P);
(void) ctx;
bool ok = sc_frame_buffer_init(&vs->fb);
static const struct sc_video_buffer_callbacks cbs = {
.on_new_frame = sc_video_buffer_on_new_frame,
};
bool ok = sc_video_buffer_init(&vs->vb, vs->buffering_time, &cbs, vs);
if (!ok) {
return false;
}
ok = sc_video_buffer_start(&vs->vb);
if (!ok) {
goto error_video_buffer_destroy;
}
ok = sc_mutex_init(&vs->mutex);
if (!ok) {
goto error_frame_buffer_destroy;
goto error_video_buffer_stop_and_join;
}
ok = sc_cond_init(&vs->cond);
@@ -278,8 +300,11 @@ error_cond_destroy:
sc_cond_destroy(&vs->cond);
error_mutex_destroy:
sc_mutex_destroy(&vs->mutex);
error_frame_buffer_destroy:
sc_frame_buffer_destroy(&vs->fb);
error_video_buffer_stop_and_join:
sc_video_buffer_stop(&vs->vb);
sc_video_buffer_join(&vs->vb);
error_video_buffer_destroy:
sc_video_buffer_destroy(&vs->vb);
return false;
}
@@ -291,7 +316,10 @@ sc_v4l2_sink_close(struct sc_v4l2_sink *vs) {
sc_cond_signal(&vs->cond);
sc_mutex_unlock(&vs->mutex);
sc_video_buffer_stop(&vs->vb);
sc_thread_join(&vs->thread, NULL);
sc_video_buffer_join(&vs->vb);
av_packet_free(&vs->packet);
av_frame_free(&vs->frame);
@@ -301,25 +329,12 @@ sc_v4l2_sink_close(struct sc_v4l2_sink *vs) {
avformat_free_context(vs->format_ctx);
sc_cond_destroy(&vs->cond);
sc_mutex_destroy(&vs->mutex);
sc_frame_buffer_destroy(&vs->fb);
sc_video_buffer_destroy(&vs->vb);
}
static bool
sc_v4l2_sink_push(struct sc_v4l2_sink *vs, const AVFrame *frame) {
bool previous_skipped;
bool ok = sc_frame_buffer_push(&vs->fb, frame, &previous_skipped);
if (!ok) {
return false;
}
if (!previous_skipped) {
sc_mutex_lock(&vs->mutex);
vs->has_frame = true;
sc_cond_signal(&vs->cond);
sc_mutex_unlock(&vs->mutex);
}
return true;
return sc_video_buffer_push(&vs->vb, frame);
}
static bool
@@ -342,7 +357,7 @@ sc_v4l2_frame_sink_push(struct sc_frame_sink *sink, const AVFrame *frame) {
bool
sc_v4l2_sink_init(struct sc_v4l2_sink *vs, const char *device_name,
struct sc_size frame_size) {
struct sc_size frame_size, sc_tick buffering_time) {
vs->device_name = strdup(device_name);
if (!vs->device_name) {
LOGE("Could not strdup v4l2 device name");
@@ -350,6 +365,7 @@ sc_v4l2_sink_init(struct sc_v4l2_sink *vs, const char *device_name,
}
vs->frame_size = frame_size;
vs->buffering_time = buffering_time;
static const struct sc_frame_sink_ops ops = {
.open = sc_v4l2_frame_sink_open,

View File

@@ -8,18 +8,19 @@
#include "coords.h"
#include "trait/frame_sink.h"
#include "frame_buffer.h"
#include "video_buffer.h"
#include "util/tick.h"
struct sc_v4l2_sink {
struct sc_frame_sink frame_sink; // frame sink trait
struct sc_frame_buffer fb;
struct sc_video_buffer vb;
AVFormatContext *format_ctx;
AVCodecContext *encoder_ctx;
char *device_name;
struct sc_size frame_size;
sc_tick buffering_time;
sc_thread thread;
sc_mutex mutex;
@@ -34,7 +35,7 @@ struct sc_v4l2_sink {
bool
sc_v4l2_sink_init(struct sc_v4l2_sink *vs, const char *device_name,
struct sc_size frame_size);
struct sc_size frame_size, sc_tick buffering_time);
void
sc_v4l2_sink_destroy(struct sc_v4l2_sink *vs);

app/src/video_buffer.c (new file, 254 lines)
View File

@@ -0,0 +1,254 @@
#include "video_buffer.h"
#include <assert.h>
#include <stdlib.h>
#include <libavutil/avutil.h>
#include <libavformat/avformat.h>
#include "util/log.h"
#define SC_BUFFERING_NDEBUG // comment to debug
static struct sc_video_buffer_frame *
sc_video_buffer_frame_new(const AVFrame *frame) {
struct sc_video_buffer_frame *vb_frame = malloc(sizeof(*vb_frame));
if (!vb_frame) {
LOG_OOM();
return NULL;
}
vb_frame->frame = av_frame_alloc();
if (!vb_frame->frame) {
LOG_OOM();
free(vb_frame);
return NULL;
}
if (av_frame_ref(vb_frame->frame, frame)) {
av_frame_free(&vb_frame->frame);
free(vb_frame);
return NULL;
}
return vb_frame;
}
static void
sc_video_buffer_frame_delete(struct sc_video_buffer_frame *vb_frame) {
av_frame_unref(vb_frame->frame);
av_frame_free(&vb_frame->frame);
free(vb_frame);
}
static bool
sc_video_buffer_offer(struct sc_video_buffer *vb, const AVFrame *frame) {
bool previous_skipped;
bool ok = sc_frame_buffer_push(&vb->fb, frame, &previous_skipped);
if (!ok) {
return false;
}
vb->cbs->on_new_frame(vb, previous_skipped, vb->cbs_userdata);
return true;
}
static int
run_buffering(void *data) {
struct sc_video_buffer *vb = data;
assert(vb->buffering_time > 0);
for (;;) {
sc_mutex_lock(&vb->b.mutex);
while (!vb->b.stopped && sc_queue_is_empty(&vb->b.queue)) {
sc_cond_wait(&vb->b.queue_cond, &vb->b.mutex);
}
if (vb->b.stopped) {
sc_mutex_unlock(&vb->b.mutex);
goto stopped;
}
struct sc_video_buffer_frame *vb_frame;
sc_queue_take(&vb->b.queue, next, &vb_frame);
sc_tick max_deadline = sc_tick_now() + vb->buffering_time;
// PTS (written by the server) are expressed in microseconds
sc_tick pts = SC_TICK_TO_US(vb_frame->frame->pts);
bool timed_out = false;
while (!vb->b.stopped && !timed_out) {
sc_tick deadline = sc_clock_to_system_time(&vb->b.clock, pts)
+ vb->buffering_time;
if (deadline > max_deadline) {
deadline = max_deadline;
}
timed_out =
!sc_cond_timedwait(&vb->b.wait_cond, &vb->b.mutex, deadline);
}
if (vb->b.stopped) {
sc_video_buffer_frame_delete(vb_frame);
sc_mutex_unlock(&vb->b.mutex);
goto stopped;
}
sc_mutex_unlock(&vb->b.mutex);
#ifndef SC_BUFFERING_NDEBUG
LOGD("Buffering: %" PRItick ";%" PRItick ";%" PRItick,
pts, vb_frame->push_date, sc_tick_now());
#endif
sc_video_buffer_offer(vb, vb_frame->frame);
sc_video_buffer_frame_delete(vb_frame);
}
stopped:
// Flush queue
while (!sc_queue_is_empty(&vb->b.queue)) {
struct sc_video_buffer_frame *vb_frame;
sc_queue_take(&vb->b.queue, next, &vb_frame);
sc_video_buffer_frame_delete(vb_frame);
}
LOGD("Buffering thread ended");
return 0;
}
bool
sc_video_buffer_init(struct sc_video_buffer *vb, sc_tick buffering_time,
const struct sc_video_buffer_callbacks *cbs,
void *cbs_userdata) {
bool ok = sc_frame_buffer_init(&vb->fb);
if (!ok) {
return false;
}
assert(buffering_time >= 0);
if (buffering_time) {
ok = sc_mutex_init(&vb->b.mutex);
if (!ok) {
sc_frame_buffer_destroy(&vb->fb);
return false;
}
ok = sc_cond_init(&vb->b.queue_cond);
if (!ok) {
sc_mutex_destroy(&vb->b.mutex);
sc_frame_buffer_destroy(&vb->fb);
return false;
}
ok = sc_cond_init(&vb->b.wait_cond);
if (!ok) {
sc_cond_destroy(&vb->b.queue_cond);
sc_mutex_destroy(&vb->b.mutex);
sc_frame_buffer_destroy(&vb->fb);
return false;
}
sc_clock_init(&vb->b.clock);
sc_queue_init(&vb->b.queue);
}
assert(cbs);
assert(cbs->on_new_frame);
vb->buffering_time = buffering_time;
vb->cbs = cbs;
vb->cbs_userdata = cbs_userdata;
return true;
}
bool
sc_video_buffer_start(struct sc_video_buffer *vb) {
if (vb->buffering_time) {
bool ok =
sc_thread_create(&vb->b.thread, run_buffering, "scrcpy-vbuf", vb);
if (!ok) {
LOGE("Could not start buffering thread");
return false;
}
}
return true;
}
void
sc_video_buffer_stop(struct sc_video_buffer *vb) {
if (vb->buffering_time) {
sc_mutex_lock(&vb->b.mutex);
vb->b.stopped = true;
sc_cond_signal(&vb->b.queue_cond);
sc_cond_signal(&vb->b.wait_cond);
sc_mutex_unlock(&vb->b.mutex);
}
}
void
sc_video_buffer_join(struct sc_video_buffer *vb) {
if (vb->buffering_time) {
sc_thread_join(&vb->b.thread, NULL);
}
}
void
sc_video_buffer_destroy(struct sc_video_buffer *vb) {
sc_frame_buffer_destroy(&vb->fb);
if (vb->buffering_time) {
sc_cond_destroy(&vb->b.wait_cond);
sc_cond_destroy(&vb->b.queue_cond);
sc_mutex_destroy(&vb->b.mutex);
}
}
bool
sc_video_buffer_push(struct sc_video_buffer *vb, const AVFrame *frame) {
if (!vb->buffering_time) {
// No buffering
return sc_video_buffer_offer(vb, frame);
}
sc_mutex_lock(&vb->b.mutex);
sc_tick pts = SC_TICK_FROM_US(frame->pts);
sc_clock_update(&vb->b.clock, sc_tick_now(), pts);
sc_cond_signal(&vb->b.wait_cond);
if (vb->b.clock.count == 1) {
sc_mutex_unlock(&vb->b.mutex);
// First frame, offer it immediately, for two reasons:
// - not to delay the opening of the scrcpy window
// - the buffering estimation needs at least two clock points, so it
// could not handle the first frame
return sc_video_buffer_offer(vb, frame);
}
struct sc_video_buffer_frame *vb_frame = sc_video_buffer_frame_new(frame);
if (!vb_frame) {
sc_mutex_unlock(&vb->b.mutex);
LOG_OOM();
return false;
}
#ifndef SC_BUFFERING_NDEBUG
vb_frame->push_date = sc_tick_now();
#endif
sc_queue_push(&vb->b.queue, next, vb_frame);
sc_cond_signal(&vb->b.queue_cond);
sc_mutex_unlock(&vb->b.mutex);
return true;
}
void
sc_video_buffer_consume(struct sc_video_buffer *vb, AVFrame *dst) {
sc_frame_buffer_consume(&vb->fb, dst);
}
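For orientation, the wiring expected by this API mirrors the v4l2 sink changes shown earlier; a minimal hypothetical consumer could look like the following sketch (my_consumer, my_on_new_frame and my_consumer_open are illustrative names, not part of the codebase):

#include <stdbool.h>
#include "video_buffer.h"

struct my_consumer {
    struct sc_video_buffer vb;
    // plus whatever is needed to wake the consuming thread (mutex/cond)
};

static void
my_on_new_frame(struct sc_video_buffer *vb, bool previous_skipped,
                void *userdata) {
    (void) vb;
    (void) previous_skipped;
    struct my_consumer *c = userdata;
    (void) c;
    // Signal the consuming thread here; it will call
    // sc_video_buffer_consume() to retrieve the most recent frame.
}

static bool
my_consumer_open(struct my_consumer *c, sc_tick buffering_time) {
    static const struct sc_video_buffer_callbacks cbs = {
        .on_new_frame = my_on_new_frame,
    };
    if (!sc_video_buffer_init(&c->vb, buffering_time, &cbs, c)) {
        return false;
    }
    if (!sc_video_buffer_start(&c->vb)) {
        sc_video_buffer_destroy(&c->vb);
        return false;
    }
    return true;
}

With buffering enabled, each queued frame is released by the buffering thread once its wait reaches sc_clock_to_system_time(clock, pts) + buffering_time (capped at the dequeue time plus buffering_time), which is what produces the configured delay.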

app/src/video_buffer.h (new file, 76 lines)
View File

@@ -0,0 +1,76 @@
#ifndef SC_VIDEO_BUFFER_H
#define SC_VIDEO_BUFFER_H
#include "common.h"
#include <stdbool.h>
#include "clock.h"
#include "frame_buffer.h"
#include "util/queue.h"
#include "util/thread.h"
#include "util/tick.h"
// forward declarations
typedef struct AVFrame AVFrame;
struct sc_video_buffer_frame {
AVFrame *frame;
struct sc_video_buffer_frame *next;
#ifndef NDEBUG
sc_tick push_date;
#endif
};
struct sc_video_buffer_frame_queue SC_QUEUE(struct sc_video_buffer_frame);
struct sc_video_buffer {
struct sc_frame_buffer fb;
sc_tick buffering_time;
// only if buffering_time > 0
struct {
sc_thread thread;
sc_mutex mutex;
sc_cond queue_cond;
sc_cond wait_cond;
struct sc_clock clock;
struct sc_video_buffer_frame_queue queue;
bool stopped;
} b; // buffering
const struct sc_video_buffer_callbacks *cbs;
void *cbs_userdata;
};
struct sc_video_buffer_callbacks {
void (*on_new_frame)(struct sc_video_buffer *vb, bool previous_skipped,
void *userdata);
};
bool
sc_video_buffer_init(struct sc_video_buffer *vb, sc_tick buffering_time,
const struct sc_video_buffer_callbacks *cbs,
void *cbs_userdata);
bool
sc_video_buffer_start(struct sc_video_buffer *vb);
void
sc_video_buffer_stop(struct sc_video_buffer *vb);
void
sc_video_buffer_join(struct sc_video_buffer *vb);
void
sc_video_buffer_destroy(struct sc_video_buffer *vb);
bool
sc_video_buffer_push(struct sc_video_buffer *vb, const AVFrame *frame);
void
sc_video_buffer_consume(struct sc_video_buffer *vb, AVFrame *dst);
#endif

View File

@@ -13,23 +13,23 @@ void test_bytebuf_simple(void) {
assert(ok);
sc_bytebuf_write(&buf, (uint8_t *) "hello", sizeof("hello") - 1);
assert(sc_bytebuf_read_available(&buf) == 5);
assert(sc_bytebuf_read_remaining(&buf) == 5);
sc_bytebuf_read(&buf, data, 4);
assert(!strncmp((char *) data, "hell", 4));
sc_bytebuf_write(&buf, (uint8_t *) " world", sizeof(" world") - 1);
assert(sc_bytebuf_read_available(&buf) == 7);
assert(sc_bytebuf_read_remaining(&buf) == 7);
sc_bytebuf_write(&buf, (uint8_t *) "!", 1);
assert(sc_bytebuf_read_available(&buf) == 8);
assert(sc_bytebuf_read_remaining(&buf) == 8);
sc_bytebuf_read(&buf, &data[4], 8);
assert(sc_bytebuf_read_available(&buf) == 0);
assert(sc_bytebuf_read_remaining(&buf) == 0);
data[12] = '\0';
assert(!strcmp((char *) data, "hello world!"));
assert(sc_bytebuf_read_available(&buf) == 0);
assert(sc_bytebuf_read_remaining(&buf) == 0);
sc_bytebuf_destroy(&buf);
}
@@ -42,31 +42,58 @@ void test_bytebuf_boundaries(void) {
assert(ok);
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_read_available(&buf) == 6);
assert(sc_bytebuf_read_remaining(&buf) == 6);
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_read_available(&buf) == 12);
assert(sc_bytebuf_read_remaining(&buf) == 12);
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_read_available(&buf) == 18);
assert(sc_bytebuf_read_remaining(&buf) == 18);
sc_bytebuf_read(&buf, data, 9);
assert(!strncmp((char *) data, "hello hel", 9));
assert(sc_bytebuf_read_available(&buf) == 9);
assert(sc_bytebuf_read_remaining(&buf) == 9);
sc_bytebuf_write(&buf, (uint8_t *) "world", sizeof("world") - 1);
assert(sc_bytebuf_read_available(&buf) == 14);
assert(sc_bytebuf_read_remaining(&buf) == 14);
sc_bytebuf_write(&buf, (uint8_t *) "!", 1);
assert(sc_bytebuf_read_available(&buf) == 15);
assert(sc_bytebuf_read_remaining(&buf) == 15);
sc_bytebuf_skip(&buf, 3);
assert(sc_bytebuf_read_available(&buf) == 12);
assert(sc_bytebuf_read_remaining(&buf) == 12);
sc_bytebuf_read(&buf, data, 12);
data[12] = '\0';
assert(!strcmp((char *) data, "hello world!"));
assert(sc_bytebuf_read_available(&buf) == 0);
assert(sc_bytebuf_read_remaining(&buf) == 0);
sc_bytebuf_destroy(&buf);
}
void test_bytebuf_overwrite(void) {
struct sc_bytebuf buf;
uint8_t data[10];
bool ok = sc_bytebuf_init(&buf, 10); // so actual capacity is 9
assert(ok);
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_read_remaining(&buf) == 6);
sc_bytebuf_write(&buf, (uint8_t *) "abcdef", sizeof("abcdef") - 1);
assert(sc_bytebuf_read_remaining(&buf) == 9);
sc_bytebuf_read(&buf, data, 9);
assert(!strncmp((char *) data, "lo abcdef", 9));
sc_bytebuf_write(&buf, (uint8_t *) "a very big buffer",
sizeof("a very big buffer") - 1);
assert(sc_bytebuf_read_remaining(&buf) == 9);
sc_bytebuf_read(&buf, data, 9);
assert(!strncmp((char *) data, "ig buffer", 9));
assert(sc_bytebuf_read_remaining(&buf) == 0);
sc_bytebuf_destroy(&buf);
}
@@ -79,37 +106,37 @@ void test_bytebuf_two_steps_write(void) {
assert(ok);
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_read_available(&buf) == 6);
assert(sc_bytebuf_read_remaining(&buf) == 6);
sc_bytebuf_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_read_available(&buf) == 12);
assert(sc_bytebuf_read_remaining(&buf) == 12);
sc_bytebuf_prepare_write(&buf, (uint8_t *) "hello ", sizeof("hello ") - 1);
assert(sc_bytebuf_read_available(&buf) == 12); // write not committed yet
assert(sc_bytebuf_read_remaining(&buf) == 12); // write not committed yet
sc_bytebuf_read(&buf, data, 9);
assert(!strncmp((char *) data, "hello hel", 3));
assert(sc_bytebuf_read_available(&buf) == 3);
assert(sc_bytebuf_read_remaining(&buf) == 3);
sc_bytebuf_commit_write(&buf, sizeof("hello ") - 1);
assert(sc_bytebuf_read_available(&buf) == 9);
assert(sc_bytebuf_read_remaining(&buf) == 9);
sc_bytebuf_prepare_write(&buf, (uint8_t *) "world", sizeof("world") - 1);
assert(sc_bytebuf_read_available(&buf) == 9); // write not committed yet
assert(sc_bytebuf_read_remaining(&buf) == 9); // write not committed yet
sc_bytebuf_commit_write(&buf, sizeof("world") - 1);
assert(sc_bytebuf_read_available(&buf) == 14);
assert(sc_bytebuf_read_remaining(&buf) == 14);
sc_bytebuf_write(&buf, (uint8_t *) "!", 1);
assert(sc_bytebuf_read_available(&buf) == 15);
assert(sc_bytebuf_read_remaining(&buf) == 15);
sc_bytebuf_skip(&buf, 3);
assert(sc_bytebuf_read_available(&buf) == 12);
assert(sc_bytebuf_read_remaining(&buf) == 12);
sc_bytebuf_read(&buf, data, 12);
data[12] = '\0';
assert(!strcmp((char *) data, "hello world!"));
assert(sc_bytebuf_read_available(&buf) == 0);
assert(sc_bytebuf_read_remaining(&buf) == 0);
sc_bytebuf_destroy(&buf);
}
@@ -120,6 +147,7 @@ int main(int argc, char *argv[]) {
test_bytebuf_simple();
test_bytebuf_boundaries();
test_bytebuf_overwrite();
test_bytebuf_two_steps_write();
return 0;

app/tests/test_cbuf.c (new file, 78 lines)
View File

@@ -0,0 +1,78 @@
#include "common.h"
#include <assert.h>
#include <string.h>
#include "util/cbuf.h"
struct int_queue CBUF(int, 32);
static void test_cbuf_empty(void) {
struct int_queue queue;
cbuf_init(&queue);
assert(cbuf_is_empty(&queue));
bool push_ok = cbuf_push(&queue, 42);
assert(push_ok);
assert(!cbuf_is_empty(&queue));
int item;
bool take_ok = cbuf_take(&queue, &item);
assert(take_ok);
assert(cbuf_is_empty(&queue));
bool take_empty_ok = cbuf_take(&queue, &item);
assert(!take_empty_ok); // the queue is empty
}
static void test_cbuf_full(void) {
struct int_queue queue;
cbuf_init(&queue);
assert(!cbuf_is_full(&queue));
// fill the queue
for (int i = 0; i < 32; ++i) {
bool ok = cbuf_push(&queue, i);
assert(ok);
}
bool ok = cbuf_push(&queue, 42);
assert(!ok); // the queue is full
int item;
bool take_ok = cbuf_take(&queue, &item);
assert(take_ok);
assert(!cbuf_is_full(&queue));
}
static void test_cbuf_push_take(void) {
struct int_queue queue;
cbuf_init(&queue);
bool push1_ok = cbuf_push(&queue, 42);
assert(push1_ok);
bool push2_ok = cbuf_push(&queue, 35);
assert(push2_ok);
int item;
bool take1_ok = cbuf_take(&queue, &item);
assert(take1_ok);
assert(item == 42);
bool take2_ok = cbuf_take(&queue, &item);
assert(take2_ok);
assert(item == 35);
}
int main(int argc, char *argv[]) {
(void) argc;
(void) argv;
test_cbuf_empty();
test_cbuf_full();
test_cbuf_push_take();
return 0;
}

app/tests/test_queue.c (new file, 43 lines)
View File

@@ -0,0 +1,43 @@
#include "common.h"
#include <assert.h>
#include "util/queue.h"
struct foo {
int value;
struct foo *next;
};
static void test_queue(void) {
struct my_queue SC_QUEUE(struct foo) queue;
sc_queue_init(&queue);
assert(sc_queue_is_empty(&queue));
struct foo v1 = { .value = 42 };
struct foo v2 = { .value = 27 };
sc_queue_push(&queue, next, &v1);
sc_queue_push(&queue, next, &v2);
struct foo *foo;
assert(!sc_queue_is_empty(&queue));
sc_queue_take(&queue, next, &foo);
assert(foo->value == 42);
assert(!sc_queue_is_empty(&queue));
sc_queue_take(&queue, next, &foo);
assert(foo->value == 27);
assert(sc_queue_is_empty(&queue));
}
int main(int argc, char *argv[]) {
(void) argc;
(void) argv;
test_queue();
return 0;
}

View File

@@ -1,197 +0,0 @@
#include "common.h"
#include <assert.h>
#include "util/vecdeque.h"
#define pr(pv) \
({ \
fprintf(stderr, "cap=%lu origin=%lu size=%lu\n", (pv)->cap, (pv)->origin, (pv)->size); \
for (size_t i = 0; i < (pv)->cap; ++i) \
fprintf(stderr, "%d ", (pv)->data[i]); \
fprintf(stderr, "\n"); \
})
static void test_vecdeque_push_pop(void) {
struct SC_VECDEQUE(int) vdq = SC_VECDEQUE_INITIALIZER;
assert(sc_vecdeque_is_empty(&vdq));
assert(sc_vecdeque_size(&vdq) == 0);
bool ok = sc_vecdeque_push(&vdq, 5);
assert(ok);
assert(sc_vecdeque_size(&vdq) == 1);
ok = sc_vecdeque_push(&vdq, 12);
assert(ok);
assert(sc_vecdeque_size(&vdq) == 2);
int v = sc_vecdeque_pop(&vdq);
assert(v == 5);
assert(sc_vecdeque_size(&vdq) == 1);
ok = sc_vecdeque_push(&vdq, 7);
assert(ok);
assert(sc_vecdeque_size(&vdq) == 2);
int *p = sc_vecdeque_popref(&vdq);
assert(p);
assert(*p == 12);
assert(sc_vecdeque_size(&vdq) == 1);
v = sc_vecdeque_pop(&vdq);
assert(v == 7);
assert(sc_vecdeque_size(&vdq) == 0);
assert(sc_vecdeque_is_empty(&vdq));
sc_vecdeque_destroy(&vdq);
}
static void test_vecdeque_reserve(void) {
struct SC_VECDEQUE(int) vdq = SC_VECDEQUE_INITIALIZER;
bool ok = sc_vecdeque_reserve(&vdq, 20);
assert(ok);
assert(vdq.cap == 20);
assert(sc_vecdeque_size(&vdq) == 0);
for (size_t i = 0; i < 20; ++i) {
ok = sc_vecdeque_push(&vdq, i);
assert(ok);
}
assert(sc_vecdeque_size(&vdq) == 20);
// It is now full
for (int i = 0; i < 5; ++i) {
int v = sc_vecdeque_pop(&vdq);
assert(v == i);
}
assert(sc_vecdeque_size(&vdq) == 15);
for (int i = 20; i < 25; ++i) {
ok = sc_vecdeque_push(&vdq, i);
assert(ok);
}
assert(sc_vecdeque_size(&vdq) == 20);
assert(vdq.cap == 20);
// Now, the content wraps around the ring buffer:
// 20 21 22 23 24 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
// ^
// origin
// It is now full, let's reserve some space
ok = sc_vecdeque_reserve(&vdq, 30);
assert(ok);
assert(vdq.cap == 30);
assert(sc_vecdeque_size(&vdq) == 20);
for (int i = 0; i < 20; ++i) {
// We should retrieve the items we inserted in order
int v = sc_vecdeque_pop(&vdq);
assert(v == i + 5);
}
assert(sc_vecdeque_size(&vdq) == 0);
sc_vecdeque_destroy(&vdq);
}
static void test_vecdeque_grow() {
struct SC_VECDEQUE(int) vdq = SC_VECDEQUE_INITIALIZER;
bool ok = sc_vecdeque_reserve(&vdq, 20);
assert(ok);
assert(vdq.cap == 20);
assert(sc_vecdeque_size(&vdq) == 0);
for (int i = 0; i < 500; ++i) {
ok = sc_vecdeque_push(&vdq, i);
assert(ok);
}
assert(sc_vecdeque_size(&vdq) == 500);
for (int i = 0; i < 100; ++i) {
int v = sc_vecdeque_pop(&vdq);
assert(v == i);
}
assert(sc_vecdeque_size(&vdq) == 400);
for (int i = 500; i < 1000; ++i) {
ok = sc_vecdeque_push(&vdq, i);
assert(ok);
}
assert(sc_vecdeque_size(&vdq) == 900);
for (int i = 100; i < 1000; ++i) {
int v = sc_vecdeque_pop(&vdq);
assert(v == i);
}
assert(sc_vecdeque_size(&vdq) == 0);
sc_vecdeque_destroy(&vdq);
}
static void test_vecdeque_push_hole() {
struct SC_VECDEQUE(int) vdq = SC_VECDEQUE_INITIALIZER;
bool ok = sc_vecdeque_reserve(&vdq, 20);
assert(ok);
assert(vdq.cap == 20);
assert(sc_vecdeque_size(&vdq) == 0);
for (int i = 0; i < 20; ++i) {
int *p = sc_vecdeque_push_hole(&vdq);
assert(p);
*p = i * 10;
}
assert(sc_vecdeque_size(&vdq) == 20);
for (int i = 0; i < 10; ++i) {
int v = sc_vecdeque_pop(&vdq);
assert(v == i * 10);
}
assert(sc_vecdeque_size(&vdq) == 10);
for (int i = 20; i < 30; ++i) {
int *p = sc_vecdeque_push_hole(&vdq);
assert(p);
*p = i * 10;
}
assert(sc_vecdeque_size(&vdq) == 20);
for (int i = 10; i < 30; ++i) {
int v = sc_vecdeque_pop(&vdq);
assert(v == i * 10);
}
assert(sc_vecdeque_size(&vdq) == 0);
sc_vecdeque_destroy(&vdq);
}
int main(int argc, char *argv[]) {
(void) argc;
(void) argv;
test_vecdeque_push_pop();
test_vecdeque_reserve();
test_vecdeque_grow();
test_vecdeque_push_hole();
return 0;
}

View File

@@ -16,6 +16,10 @@ cpu = 'i686'
endian = 'little'
[properties]
prebuilt_ffmpeg = 'ffmpeg-6.0-scrcpy-2/win32'
ffmpeg_avcodec = 'avcodec-58'
ffmpeg_avformat = 'avformat-58'
ffmpeg_avutil = 'avutil-56'
prebuilt_ffmpeg = 'ffmpeg-win32-4.3.1'
prebuilt_sdl2 = 'SDL2-2.26.1/i686-w64-mingw32'
prebuilt_libusb = 'libusb-1.0.26/libusb-MinGW-Win32'
prebuilt_libusb_root = 'libusb-1.0.26'
prebuilt_libusb = 'libusb-1.0.26/MinGW-Win32'

View File

@@ -16,6 +16,10 @@ cpu = 'x86_64'
endian = 'little'
[properties]
prebuilt_ffmpeg = 'ffmpeg-6.0-scrcpy-2/win64'
ffmpeg_avcodec = 'avcodec-59'
ffmpeg_avformat = 'avformat-59'
ffmpeg_avutil = 'avutil-57'
prebuilt_ffmpeg = 'ffmpeg-win64-5.1.2'
prebuilt_sdl2 = 'SDL2-2.26.1/x86_64-w64-mingw32'
prebuilt_libusb = 'libusb-1.0.26/libusb-MinGW-x64'
prebuilt_libusb_root = 'libusb-1.0.26'
prebuilt_libusb = 'libusb-1.0.26/MinGW-x64'

View File

@@ -11,7 +11,7 @@
.PHONY: default clean \
test \
build-server \
prepare-deps \
prepare-deps-win32 prepare-deps-win64 \
build-win32 build-win64 \
dist-win32 dist-win64 \
zip-win32 zip-win64 \
@@ -62,13 +62,19 @@ build-server:
meson setup "$(SERVER_BUILD_DIR)" --buildtype release -Dcompile_app=false )
ninja -C "$(SERVER_BUILD_DIR)"
prepare-deps:
prepare-deps-win32:
@app/prebuilt-deps/prepare-adb.sh
@app/prebuilt-deps/prepare-sdl.sh
@app/prebuilt-deps/prepare-ffmpeg.sh
@app/prebuilt-deps/prepare-ffmpeg-win32.sh
@app/prebuilt-deps/prepare-libusb.sh
build-win32: prepare-deps
prepare-deps-win64:
@app/prebuilt-deps/prepare-adb.sh
@app/prebuilt-deps/prepare-sdl.sh
@app/prebuilt-deps/prepare-ffmpeg-win64.sh
@app/prebuilt-deps/prepare-libusb.sh
build-win32: prepare-deps-win32
[ -d "$(WIN32_BUILD_DIR)" ] || ( mkdir "$(WIN32_BUILD_DIR)" && \
meson setup "$(WIN32_BUILD_DIR)" \
--cross-file cross_win32.txt \
@@ -77,7 +83,7 @@ build-win32: prepare-deps
-Dportable=true )
ninja -C "$(WIN32_BUILD_DIR)"
build-win64: prepare-deps
build-win64: prepare-deps-win64
[ -d "$(WIN64_BUILD_DIR)" ] || ( mkdir "$(WIN64_BUILD_DIR)" && \
meson setup "$(WIN64_BUILD_DIR)" \
--cross-file cross_win64.txt \
@@ -94,16 +100,16 @@ dist-win32: build-server build-win32
cp app/data/scrcpy-noconsole.vbs "$(DIST)/$(WIN32_TARGET_DIR)"
cp app/data/icon.png "$(DIST)/$(WIN32_TARGET_DIR)"
cp app/data/open_a_terminal_here.bat "$(DIST)/$(WIN32_TARGET_DIR)"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win32/bin/avutil-58.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win32/bin/avcodec-60.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win32/bin/avformat-60.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win32/bin/swresample-4.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win32/bin/zlib1.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-win32-4.3.1/bin/avutil-56.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-win32-4.3.1/bin/avcodec-58.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-win32-4.3.1/bin/avformat-58.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-win32-4.3.1/bin/swresample-3.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-win32-4.3.1/bin/swscale-5.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/platform-tools-33.0.3/adb.exe "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/platform-tools-33.0.3/AdbWinApi.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/platform-tools-33.0.3/AdbWinUsbApi.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/SDL2-2.26.1/i686-w64-mingw32/bin/SDL2.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/libusb-1.0.26/libusb-MinGW-Win32/bin/msys-usb-1.0.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/libusb-1.0.26/MinGW-Win32/msys-usb-1.0.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
dist-win64: build-server build-win64
mkdir -p "$(DIST)/$(WIN64_TARGET_DIR)"
@@ -113,16 +119,16 @@ dist-win64: build-server build-win64
cp app/data/scrcpy-noconsole.vbs "$(DIST)/$(WIN64_TARGET_DIR)"
cp app/data/icon.png "$(DIST)/$(WIN64_TARGET_DIR)"
cp app/data/open_a_terminal_here.bat "$(DIST)/$(WIN64_TARGET_DIR)"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win64/bin/avutil-58.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win64/bin/avcodec-60.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win64/bin/avformat-60.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win64/bin/swresample-4.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-2/win64/bin/zlib1.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-win64-5.1.2/bin/avutil-57.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-win64-5.1.2/bin/avcodec-59.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-win64-5.1.2/bin/avformat-59.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-win64-5.1.2/bin/swresample-4.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-win64-5.1.2/bin/swscale-6.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/platform-tools-33.0.3/adb.exe "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/platform-tools-33.0.3/AdbWinApi.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/platform-tools-33.0.3/AdbWinUsbApi.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/SDL2-2.26.1/x86_64-w64-mingw32/bin/SDL2.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/libusb-1.0.26/libusb-MinGW-x64/bin/msys-usb-1.0.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/libusb-1.0.26/MinGW-x64/msys-usb-1.0.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
zip-win32: dist-win32
cd "$(DIST)"; \

View File

@@ -1,7 +0,0 @@
package com.genymobile.scrcpy;
public interface AsyncProcessor {
void start();
void stop();
void join() throws InterruptedException;
}

View File

@@ -1,148 +0,0 @@
package com.genymobile.scrcpy;
import com.genymobile.scrcpy.wrappers.ServiceManager;
import android.annotation.SuppressLint;
import android.annotation.TargetApi;
import android.content.ComponentName;
import android.content.Intent;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.AudioTimestamp;
import android.media.MediaCodec;
import android.media.MediaRecorder;
import android.os.Build;
import android.os.SystemClock;
import java.io.IOException;
import java.nio.ByteBuffer;
public final class AudioCapture {
public static final int SAMPLE_RATE = 48000;
public static final int CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_STEREO;
public static final int CHANNELS = 2;
public static final int FORMAT = AudioFormat.ENCODING_PCM_16BIT;
public static final int BYTES_PER_SAMPLE = 2;
private AudioRecord recorder;
private final AudioTimestamp timestamp = new AudioTimestamp();
private long previousPts = 0;
private long nextPts = 0;
public static int millisToBytes(int millis) {
return SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE * millis / 1000;
}
private static AudioFormat createAudioFormat() {
AudioFormat.Builder builder = new AudioFormat.Builder();
builder.setEncoding(FORMAT);
builder.setSampleRate(SAMPLE_RATE);
builder.setChannelMask(CHANNEL_CONFIG);
return builder.build();
}
@TargetApi(Build.VERSION_CODES.M)
@SuppressLint({"WrongConstant", "MissingPermission"})
private static AudioRecord createAudioRecord() {
AudioRecord.Builder builder = new AudioRecord.Builder();
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
// On older APIs, Workarounds.fillAppInfo() must be called beforehand
builder.setContext(FakeContext.get());
}
builder.setAudioSource(MediaRecorder.AudioSource.REMOTE_SUBMIX);
builder.setAudioFormat(createAudioFormat());
int minBufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE, CHANNEL_CONFIG, FORMAT);
// This buffer size does not impact latency
builder.setBufferSizeInBytes(8 * minBufferSize);
return builder.build();
}
private static void startWorkaroundAndroid11() {
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
// Android 11 requires apps to be in the foreground to record audio.
// Normally, each app has its own user ID, and Android checks whether the requesting app's user ID is in the foreground.
// But the scrcpy server is NOT an app; it is a Java application started from the Android shell, so it has the same user ID (2000) as the
// Android shell ("com.android.shell").
// If an Activity from the Android shell is running in the foreground, the permission system considers that scrcpy is also in the
// foreground.
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
Intent intent = new Intent(Intent.ACTION_MAIN);
intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
intent.addCategory(Intent.CATEGORY_LAUNCHER);
intent.setComponent(new ComponentName(FakeContext.PACKAGE_NAME, "com.android.shell.HeapDumpActivity"));
ServiceManager.getActivityManager().startActivityAsUserWithFeature(intent);
// Wait for activity to start
SystemClock.sleep(150);
}
}
}
private static void stopWorkaroundAndroid11() {
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
ServiceManager.getActivityManager().forceStopPackage(FakeContext.PACKAGE_NAME);
}
}
public void start() throws AudioCaptureForegroundException {
startWorkaroundAndroid11();
try {
recorder = createAudioRecord();
recorder.startRecording();
} catch (UnsupportedOperationException e) {
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
Ln.e("Failed to start audio capture");
Ln.e("On Android 11, it is only possible to capture in foreground, make sure that the device is unlocked when starting scrcpy.");
throw new AudioCaptureForegroundException();
}
throw e;
} finally {
stopWorkaroundAndroid11();
}
}
public void stop() {
if (recorder != null) {
// Will call .stop() if necessary, without throwing an IllegalStateException
recorder.release();
}
}
@TargetApi(Build.VERSION_CODES.N)
public int read(ByteBuffer directBuffer, int size, MediaCodec.BufferInfo outBufferInfo) throws IOException {
int r = recorder.read(directBuffer, size);
if (r < 0) {
return r;
}
long pts;
int ret = recorder.getTimestamp(timestamp, AudioTimestamp.TIMEBASE_MONOTONIC);
if (ret == AudioRecord.SUCCESS) {
pts = timestamp.nanoTime / 1000;
} else {
if (nextPts == 0) {
Ln.w("Could not get any audio timestamp");
}
// compute from previous timestamp and packet size
pts = nextPts;
}
long durationUs = r * 1000000 / (CHANNELS * BYTES_PER_SAMPLE * SAMPLE_RATE);
nextPts = pts + durationUs;
if (previousPts != 0 && pts < previousPts) {
// Audio PTS may come from two sources:
// - recorder.getTimestamp() if the call works;
// - an estimation from the previous PTS and the packet size as a fallback.
//
// Therefore, the property that PTS are monotonically increasing is not guaranteed in corner cases, so enforce it.
pts = previousPts + 1;
}
previousPts = pts;
outBufferInfo.set(0, r, pts, 0);
return r;
}
}
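As a worked example of millisToBytes() with the constants above: 5 ms of audio corresponds to 48000 samples/s × 2 channels × 2 bytes × 5 / 1000 = 960 bytes, which is the READ_SIZE used for each 5 ms read in the AudioEncoder changes below.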

View File

@@ -1,7 +0,0 @@
package com.genymobile.scrcpy;
/**
* Exception thrown if audio capture failed on Android 11 specifically because the running App (shell) was not in foreground.
*/
public class AudioCaptureForegroundException extends Exception {
}

View File

@@ -4,8 +4,7 @@ import android.media.MediaFormat;
public enum AudioCodec implements Codec {
OPUS(0x6f_70_75_73, "opus", MediaFormat.MIMETYPE_AUDIO_OPUS),
AAC(0x00_61_61_63, "aac", MediaFormat.MIMETYPE_AUDIO_AAC),
RAW(0x00_72_61_77, "raw", MediaFormat.MIMETYPE_AUDIO_RAW);
AAC(0x00_61_61_63, "aac", MediaFormat.MIMETYPE_AUDIO_AAC);
private final int id; // 4-byte ASCII representation of the name
private final String name;

View File

@@ -1,12 +1,22 @@
package com.genymobile.scrcpy;
import com.genymobile.scrcpy.wrappers.ServiceManager;
import android.annotation.SuppressLint;
import android.annotation.TargetApi;
import android.content.ComponentName;
import android.content.Intent;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.AudioTimestamp;
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.media.MediaRecorder;
import android.os.Build;
import android.os.Handler;
import android.os.HandlerThread;
import android.os.Looper;
import android.os.SystemClock;
import java.io.IOException;
import java.nio.ByteBuffer;
@@ -14,7 +24,7 @@ import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
public final class AudioEncoder implements AsyncProcessor {
public final class AudioEncoder {
private static class InputTask {
private final int index;
@@ -34,11 +44,14 @@ public final class AudioEncoder implements AsyncProcessor {
}
}
private static final int SAMPLE_RATE = AudioCapture.SAMPLE_RATE;
private static final int CHANNELS = AudioCapture.CHANNELS;
private static final int SAMPLE_RATE = 48000;
private static final int CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_STEREO;
private static final int CHANNELS = 2;
private static final int FORMAT = AudioFormat.ENCODING_PCM_16BIT;
private static final int BYTES_PER_SAMPLE = 2;
private static final int READ_MS = 5; // milliseconds
private static final int READ_SIZE = AudioCapture.millisToBytes(READ_MS);
private static final int BUFFER_MS = 5; // milliseconds
private static final int BUFFER_SIZE = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE * BUFFER_MS / 1000;
private final Streamer streamer;
private final int bitRate;
@@ -65,6 +78,29 @@ public final class AudioEncoder implements AsyncProcessor {
this.encoderName = encoderName;
}
private static AudioFormat createAudioFormat() {
AudioFormat.Builder builder = new AudioFormat.Builder();
builder.setEncoding(FORMAT);
builder.setSampleRate(SAMPLE_RATE);
builder.setChannelMask(CHANNEL_CONFIG);
return builder.build();
}
@TargetApi(Build.VERSION_CODES.M)
@SuppressLint({"WrongConstant", "MissingPermission"})
private static AudioRecord createAudioRecord() {
AudioRecord.Builder builder = new AudioRecord.Builder();
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) {
// On older APIs, Workarounds.fillAppInfo() must be called beforehand
builder.setContext(FakeContext.get());
}
builder.setAudioSource(MediaRecorder.AudioSource.REMOTE_SUBMIX);
builder.setAudioFormat(createAudioFormat());
int minBufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE, CHANNEL_CONFIG, FORMAT);
builder.setBufferSizeInBytes(minBufferSize);
return builder.build();
}
private static MediaFormat createFormat(String mimeType, int bitRate, List<CodecOption> codecOptions) {
MediaFormat format = new MediaFormat();
format.setString(MediaFormat.KEY_MIME, mimeType);
@@ -85,18 +121,47 @@ public final class AudioEncoder implements AsyncProcessor {
}
@TargetApi(Build.VERSION_CODES.N)
private void inputThread(MediaCodec mediaCodec, AudioCapture capture) throws IOException, InterruptedException {
final MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
private void inputThread(MediaCodec mediaCodec, AudioRecord recorder) throws IOException, InterruptedException {
final AudioTimestamp timestamp = new AudioTimestamp();
long previousPts = 0;
long nextPts = 0;
while (!Thread.currentThread().isInterrupted()) {
InputTask task = inputTasks.take();
ByteBuffer buffer = mediaCodec.getInputBuffer(task.index);
int r = capture.read(buffer, READ_SIZE, bufferInfo);
int r = recorder.read(buffer, BUFFER_SIZE);
if (r < 0) {
throw new IOException("Could not read audio: " + r);
}
mediaCodec.queueInputBuffer(task.index, bufferInfo.offset, bufferInfo.size, bufferInfo.presentationTimeUs, bufferInfo.flags);
long pts;
int ret = recorder.getTimestamp(timestamp, AudioTimestamp.TIMEBASE_MONOTONIC);
if (ret == AudioRecord.SUCCESS) {
pts = timestamp.nanoTime / 1000;
} else {
if (nextPts == 0) {
Ln.w("Could not get any audio timestamp");
}
// compute from previous timestamp and packet size
pts = nextPts;
}
long durationMs = r * 1000 / CHANNELS / SAMPLE_RATE;
nextPts = pts + durationMs;
if (previousPts != 0 && pts < previousPts) {
// Audio PTS may come from two sources:
// - recorder.getTimestamp() if the call works;
// - an estimation from the previous PTS and the packet size as a fallback.
//
// Therefore, the property that PTS are monotonically increasing is not guaranteed in corner cases, so enforce it.
pts = previousPts + 1;
}
mediaCodec.queueInputBuffer(task.index, 0, r, pts, 0);
previousPts = pts;
}
}
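When recorder.getTimestamp() fails, the loop above predicts the next PTS from the number of bytes read. A standalone sketch of that arithmetic, using the constants defined in this class, shown only to make the unit conversion explicit:
// bytesRead -> PCM frames -> microseconds; added to the current PTS to predict
// the PTS of the next packet when no hardware timestamp is available.
static long nextPtsUs(long ptsUs, int bytesRead) {
    int bytesPerFrame = CHANNELS * BYTES_PER_SAMPLE;      // 2 channels * 2 bytes = 4
    long frames = bytesRead / bytesPerFrame;              // PCM frames in this packet
    long durationUs = frames * 1_000_000L / SAMPLE_RATE;  // packet duration in microseconds
    return ptsUs + durationUs;
}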
@@ -118,8 +183,6 @@ public final class AudioEncoder implements AsyncProcessor {
thread = new Thread(() -> {
try {
encode();
} catch (ConfigurationException | AudioCaptureForegroundException e) {
// Do not print the stack trace, a user-friendly error message has already been logged
} catch (IOException e) {
Ln.e("Audio encoding error", e);
} finally {
@@ -157,8 +220,34 @@ public final class AudioEncoder implements AsyncProcessor {
}
}
private static void startWorkaroundAndroid11() {
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
// Android 11 requires apps to be in the foreground to record audio.
// Normally, each app has its own user ID, so Android checks whether the requesting app has the user ID that is currently in the
// foreground.
// But the scrcpy server is NOT an app: it is a Java application started from the Android shell, so it has the same user ID (2000) as the
// Android shell ("com.android.shell").
// If an activity from the Android shell is running in the foreground, the permission system considers scrcpy to be in the foreground too.
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
Intent intent = new Intent(Intent.ACTION_MAIN);
intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
intent.addCategory(Intent.CATEGORY_LAUNCHER);
intent.setComponent(new ComponentName(FakeContext.PACKAGE_NAME, "com.android.shell.HeapDumpActivity"));
ServiceManager.getActivityManager().startActivityAsUserWithFeature(intent);
// Wait for activity to start
SystemClock.sleep(150);
}
}
}
private static void stopWorkaroundAndroid11() {
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
ServiceManager.getActivityManager().forceStopPackage(FakeContext.PACKAGE_NAME);
}
}
@TargetApi(Build.VERSION_CODES.M)
public void encode() throws IOException, ConfigurationException, AudioCaptureForegroundException {
public void encode() throws IOException {
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.R) {
Ln.w("Audio disabled: it is not supported before Android 11");
streamer.writeDisableStream(false);
@@ -166,9 +255,11 @@ public final class AudioEncoder implements AsyncProcessor {
}
MediaCodec mediaCodec = null;
AudioCapture capture = new AudioCapture();
AudioRecord recorder = null;
boolean mediaCodecStarted = false;
boolean recorderStarted = false;
boolean configurationError = false;
try {
Codec codec = streamer.getCodec();
mediaCodec = createMediaCodec(codec, encoderName);
@@ -180,13 +271,26 @@ public final class AudioEncoder implements AsyncProcessor {
mediaCodec.setCallback(new EncoderCallback(), new Handler(mediaCodecThread.getLooper()));
mediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
capture.start();
startWorkaroundAndroid11();
try {
recorder = createAudioRecord();
recorder.startRecording();
} catch (UnsupportedOperationException e) {
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
Ln.e("Failed to start audio capture");
Ln.e("On Android 11, it is only possible to capture in foreground, make sure that the device is unlocked when starting scrcpy.");
throw new ConfigurationException("Unsupported audio capture");
}
} finally {
stopWorkaroundAndroid11();
}
recorderStarted = true;
final MediaCodec mediaCodecRef = mediaCodec;
final AudioCapture captureRef = capture;
final AudioRecord recorderRef = recorder;
inputThread = new Thread(() -> {
try {
inputThread(mediaCodecRef, captureRef);
inputThread(mediaCodecRef, recorderRef);
} catch (IOException | InterruptedException e) {
Ln.e("Audio capture error", e);
} finally {
@@ -216,14 +320,15 @@ public final class AudioEncoder implements AsyncProcessor {
waitEnded();
} catch (ConfigurationException e) {
// Report the error so that scrcpy exits
streamer.writeDisableStream(true);
throw e;
} catch (Throwable e) {
// Notify the client that the audio could not be captured
streamer.writeDisableStream(false);
throw e;
// Do not print the stack trace, a user-friendly error message has already been logged
// Report the error to scrcpy so that it exits
configurationError = true;
} finally {
if (!recorderStarted) {
// Notify the client that the audio could not be captured
streamer.writeDisableStream(configurationError);
}
// Cleanup everything (either at the end or on error at any step of the initialization)
if (mediaCodecThread != null) {
Looper looper = mediaCodecThread.getLooper();
@@ -259,8 +364,11 @@ public final class AudioEncoder implements AsyncProcessor {
}
mediaCodec.release();
}
if (capture != null) {
capture.stop();
if (recorder != null) {
if (recorderStarted) {
recorder.stop();
}
recorder.release();
}
}
}
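The EncoderCallback registered with mediaCodec.setCallback() is not part of this excerpt. Judging from the InputTask queue drained by inputThread(), it presumably looks roughly like the following sketch (output handling is simplified; names other than InputTask and inputTasks are assumptions):
// Hypothetical sketch, not the verbatim implementation.
private final class EncoderCallback extends MediaCodec.Callback {
    @Override
    public void onInputBufferAvailable(MediaCodec codec, int index) {
        try {
            inputTasks.put(new InputTask(index)); // hand the free input buffer to inputThread()
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    @Override
    public void onOutputBufferAvailable(MediaCodec codec, int index, MediaCodec.BufferInfo bufferInfo) {
        // The real implementation forwards the encoded packet to the streamer
        // (for example via an output task queue drained by a writer thread).
    }

    @Override
    public void onOutputFormatChanged(MediaCodec codec, MediaFormat format) {
        // Nothing to do in this sketch.
    }

    @Override
    public void onError(MediaCodec codec, MediaCodec.CodecException e) {
        Ln.e("MediaCodec error", e);
    }
}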

View File

@@ -1,75 +0,0 @@
package com.genymobile.scrcpy;
import android.media.MediaCodec;
import java.io.IOException;
import java.nio.ByteBuffer;
public final class AudioRawRecorder implements AsyncProcessor {
private final Streamer streamer;
private Thread thread;
private static final int READ_MS = 5; // milliseconds
private static final int READ_SIZE = AudioCapture.millisToBytes(READ_MS);
public AudioRawRecorder(Streamer streamer) {
this.streamer = streamer;
}
private void record() throws IOException, AudioCaptureForegroundException {
final ByteBuffer buffer = ByteBuffer.allocateDirect(READ_SIZE);
final MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
AudioCapture capture = new AudioCapture();
try {
capture.start();
streamer.writeHeader();
while (!Thread.currentThread().isInterrupted()) {
buffer.position(0);
int r = capture.read(buffer, READ_SIZE, bufferInfo);
if (r < 0) {
throw new IOException("Could not read audio: " + r);
}
buffer.limit(r);
streamer.writePacket(buffer, bufferInfo);
}
} catch (Throwable e) {
// Notify the client that the audio could not be captured
streamer.writeDisableStream(false);
throw e;
} finally {
capture.stop();
}
}
public void start() {
thread = new Thread(() -> {
try {
record();
} catch (AudioCaptureForegroundException e) {
// Do not print the stack trace, a user-friendly error message has already been logged
} catch (IOException e) {
Ln.e("Audio recording error", e);
} finally {
Ln.d("Audio recorder stopped");
}
});
thread.start();
}
public void stop() {
if (thread != null) {
thread.interrupt();
}
}
public void join() throws InterruptedException {
if (thread != null) {
thread.join();
}
}
}

View File

@@ -14,7 +14,7 @@ import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
public class Controller implements AsyncProcessor {
public class Controller {
private static final int DEFAULT_DEVICE_ID = 0;

View File

@@ -110,11 +110,6 @@ public final class DesktopConnection implements Closeable {
videoSocket.shutdownInput();
videoSocket.shutdownOutput();
videoSocket.close();
if (audioSocket != null) {
audioSocket.shutdownInput();
audioSocket.shutdownOutput();
audioSocket.close();
}
if (controlSocket != null) {
controlSocket.shutdownInput();
controlSocket.shutdownOutput();

View File

@@ -39,28 +39,28 @@ public final class Ln {
public static void v(String message) {
if (isEnabled(Level.VERBOSE)) {
Log.v(TAG, message);
System.out.print(PREFIX + "VERBOSE: " + message + '\n');
System.out.println(PREFIX + "VERBOSE: " + message);
}
}
public static void d(String message) {
if (isEnabled(Level.DEBUG)) {
Log.d(TAG, message);
System.out.print(PREFIX + "DEBUG: " + message + '\n');
System.out.println(PREFIX + "DEBUG: " + message);
}
}
public static void i(String message) {
if (isEnabled(Level.INFO)) {
Log.i(TAG, message);
System.out.print(PREFIX + "INFO: " + message + '\n');
System.out.println(PREFIX + "INFO: " + message);
}
}
public static void w(String message, Throwable throwable) {
if (isEnabled(Level.WARN)) {
Log.w(TAG, message, throwable);
System.err.print(PREFIX + "WARN: " + message + '\n');
System.out.println(PREFIX + "WARN: " + message);
if (throwable != null) {
throwable.printStackTrace();
}
@@ -74,7 +74,7 @@ public final class Ln {
public static void e(String message, Throwable throwable) {
if (isEnabled(Level.ERROR)) {
Log.e(TAG, message, throwable);
System.err.print(PREFIX + "ERROR: " + message + "\n");
System.out.println(PREFIX + "ERROR: " + message);
if (throwable != null) {
throwable.printStackTrace();
}

View File

@@ -13,7 +13,7 @@ public class Options {
private VideoCodec videoCodec = VideoCodec.H264;
private AudioCodec audioCodec = AudioCodec.OPUS;
private int videoBitRate = 8000000;
private int audioBitRate = 128000;
private int audioBitRate = 196000;
private int maxFps;
private int lockVideoOrientation = -1;
private boolean tunnelForward;

View File

@@ -5,7 +5,6 @@ import android.os.BatteryManager;
import android.os.Build;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
@@ -92,7 +91,8 @@ public final class Server {
Workarounds.fillAppInfo();
}
List<AsyncProcessor> asyncProcessors = new ArrayList<>();
Controller controller = null;
AudioEncoder audioEncoder = null;
try (DesktopConnection connection = DesktopConnection.open(scid, tunnelForward, audio, control, sendDummyByte)) {
if (options.getSendDeviceMeta()) {
@@ -101,34 +101,24 @@ public final class Server {
}
if (control) {
Controller controller = new Controller(device, connection, options.getClipboardAutosync(), options.getPowerOn());
device.setClipboardListener(text -> controller.getSender().pushClipboardText(text));
asyncProcessors.add(controller);
controller = new Controller(device, connection, options.getClipboardAutosync(), options.getPowerOn());
controller.start();
final Controller controllerRef = controller;
device.setClipboardListener(text -> controllerRef.getSender().pushClipboardText(text));
}
if (audio) {
AudioCodec audioCodec = options.getAudioCodec();
Streamer audioStreamer = new Streamer(connection.getAudioFd(), audioCodec, options.getSendCodecId(),
Streamer audioStreamer = new Streamer(connection.getAudioFd(), options.getAudioCodec(), options.getSendCodecId(),
options.getSendFrameMeta());
AsyncProcessor audioRecorder;
if (audioCodec == AudioCodec.RAW) {
audioRecorder = new AudioRawRecorder(audioStreamer);
} else {
audioRecorder = new AudioEncoder(audioStreamer, options.getAudioBitRate(), options.getAudioCodecOptions(),
options.getAudioEncoder());
}
asyncProcessors.add(audioRecorder);
audioEncoder = new AudioEncoder(audioStreamer, options.getAudioBitRate(), options.getAudioCodecOptions(), options.getAudioEncoder());
audioEncoder.start();
}
Streamer videoStreamer = new Streamer(connection.getVideoFd(), options.getVideoCodec(), options.getSendCodecId(),
options.getSendFrameMeta());
ScreenEncoder screenEncoder = new ScreenEncoder(device, videoStreamer, options.getVideoBitRate(), options.getMaxFps(),
options.getVideoCodecOptions(), options.getVideoEncoder(), options.getDownsizeOnError());
for (AsyncProcessor asyncProcessor : asyncProcessors) {
asyncProcessor.start();
}
try {
// synchronous
screenEncoder.streamScreen();
@@ -141,14 +131,20 @@ public final class Server {
} finally {
Ln.d("Screen streaming stopped");
initThread.interrupt();
for (AsyncProcessor asyncProcessor : asyncProcessors) {
asyncProcessor.stop();
if (audioEncoder != null) {
audioEncoder.stop();
}
if (controller != null) {
controller.stop();
}
try {
initThread.join();
for (AsyncProcessor asyncProcessor : asyncProcessors) {
asyncProcessor.join();
if (audioEncoder != null) {
audioEncoder.join();
}
if (controller != null) {
controller.join();
}
} catch (InterruptedException e) {
// ignore
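The Controller and AudioEncoder changes in this file revolve around the AsyncProcessor abstraction: instances are collected in asyncProcessors and then started, stopped and joined in loops. Its definition is not part of this excerpt; inferred from those call sites, it is presumably close to this sketch:
// Sketch inferred from the call sites above; the actual interface may differ.
public interface AsyncProcessor {
    void start();
    void stop();
    void join() throws InterruptedException;
}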

View File

@@ -51,34 +51,27 @@ public final class Streamer {
IO.writeFully(fd, code, 0, code.length);
}
public void writePacket(ByteBuffer buffer, long pts, boolean config, boolean keyFrame) throws IOException {
if (config && codec == AudioCodec.OPUS) {
fixOpusConfigPacket(buffer);
public void writePacket(ByteBuffer codecBuffer, MediaCodec.BufferInfo bufferInfo) throws IOException {
if (codec == AudioCodec.OPUS) {
fixOpusConfigPacket(codecBuffer, bufferInfo);
}
if (sendFrameMeta) {
writeFrameMeta(fd, buffer.remaining(), pts, config, keyFrame);
writeFrameMeta(fd, bufferInfo, codecBuffer.remaining());
}
IO.writeFully(fd, buffer);
IO.writeFully(fd, codecBuffer);
}
public void writePacket(ByteBuffer codecBuffer, MediaCodec.BufferInfo bufferInfo) throws IOException {
long pts = bufferInfo.presentationTimeUs;
boolean config = (bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0;
boolean keyFrame = (bufferInfo.flags & MediaCodec.BUFFER_FLAG_KEY_FRAME) != 0;
writePacket(codecBuffer, pts, config, keyFrame);
}
private void writeFrameMeta(FileDescriptor fd, int packetSize, long pts, boolean config, boolean keyFrame) throws IOException {
private void writeFrameMeta(FileDescriptor fd, MediaCodec.BufferInfo bufferInfo, int packetSize) throws IOException {
headerBuffer.clear();
long ptsAndFlags;
if (config) {
if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
ptsAndFlags = PACKET_FLAG_CONFIG; // non-media data packet
} else {
ptsAndFlags = pts;
if (keyFrame) {
ptsAndFlags = bufferInfo.presentationTimeUs;
if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_KEY_FRAME) != 0) {
ptsAndFlags |= PACKET_FLAG_KEY_FRAME;
}
}
@@ -89,7 +82,7 @@ public final class Streamer {
IO.writeFully(fd, headerBuffer);
}
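For reference, a minimal sketch of the frame header that writeFrameMeta() emits, assuming a 12-byte layout: an 8-byte PTS field whose two most significant bits carry the config and key-frame flags, followed by the 4-byte packet size (the exact flag values and header size are assumptions based on the constants used above):
// Hypothetical sketch; flag values and header size are inferred, not verbatim.
private static final long PACKET_FLAG_CONFIG = 1L << 63;
private static final long PACKET_FLAG_KEY_FRAME = 1L << 62;

private static ByteBuffer buildFrameHeader(long ptsAndFlags, int packetSize) {
    ByteBuffer header = ByteBuffer.allocate(12); // 8 bytes PTS+flags, 4 bytes packet size
    header.putLong(ptsAndFlags);
    header.putInt(packetSize);
    header.flip(); // ready to be written with IO.writeFully()
    return header;
}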
private static void fixOpusConfigPacket(ByteBuffer buffer) throws IOException {
private static void fixOpusConfigPacket(ByteBuffer buffer, MediaCodec.BufferInfo bufferInfo) throws IOException {
// Here is an example of the config packet received for an OPUS stream:
//
// 00000000 41 4f 50 55 53 48 44 52 13 00 00 00 00 00 00 00 |AOPUSHDR........|
@@ -106,6 +99,11 @@ public final class Streamer {
//
// <https://developer.android.com/reference/android/media/MediaCodec#CSD>
boolean isConfig = (bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0;
if (!isConfig) {
return;
}
if (buffer.remaining() < 16) {
throw new IOException("Not enough data in OPUS config packet");
}
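The remainder of fixOpusConfigPacket() is cut off in this diff. Based on the hexdump in the comment above (an 8-byte "AOPUSHDR" marker, a 64-bit little-endian payload length, then the raw OpusHead bytes), the stripping step presumably resembles this sketch (the method name and details are assumptions; java.nio.ByteOrder and java.nio.charset.StandardCharsets are assumed to be imported):
// Minimal sketch, not the verbatim implementation: skip the "AOPUSHDR" wrapper so
// that only the raw OpusHead bytes remain in the buffer sent to the client.
private static void stripAopusWrapper(ByteBuffer buffer) throws IOException {
    if (buffer.remaining() < 16) {
        throw new IOException("Not enough data in OPUS config packet");
    }
    byte[] idBytes = new byte[8];
    buffer.get(idBytes); // "AOPUSHDR"
    if (!"AOPUSHDR".equals(new String(idBytes, StandardCharsets.US_ASCII))) {
        throw new IOException("OPUS header not found");
    }
    buffer.order(ByteOrder.LITTLE_ENDIAN);
    long size = buffer.getLong(); // payload length (0x13 in the dump above)
    if (size < 0 || size > buffer.remaining()) {
        throw new IOException("Invalid OPUS header size: " + size);
    }
    buffer.limit(buffer.position() + (int) size); // keep only the OpusHead payload
}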

BIN
u/.ninja_deps Normal file

Binary file not shown.

62
u/.ninja_log Normal file
View File

@@ -0,0 +1,62 @@
# ninja log v5
0 243 1542027815 build.ninja ef03cd8523486e97
0 58 1542027815 app/app@@scrcpy@exe/src_str_util.c.o 2aa692e7aa83914
0 87 1542027815 app/app@@scrcpy@exe/src_sys_unix_net.c.o 7ea14bd07e90ff97
0 91 1542027815 app/app@@scrcpy@exe/src_sys_unix_command.c.o dd44ba15cc3d6a7e
1 113 1542027815 app/app@@scrcpy@exe/src_command.c.o 1eaa0f061a5c0447
0 114 1542027815 app/app@@scrcpy@exe/src_server.c.o 8b376071b5e0aaf1
1 117 1542027815 app/app@@scrcpy@exe/src_controller.c.o 907de440054c77e7
1 146 1542027815 app/app@@scrcpy@exe/src_control_event.c.o fcfa5a6c322ebf8b
91 202 1542027815 app/app@@scrcpy@exe/src_device.c.o 9ac9441f4f2e4d54
58 215 1542027815 app/app@@scrcpy@exe/src_convert.c.o 9de268e9b915094e
115 219 1542027815 app/app@@scrcpy@exe/src_fps_counter.c.o 22b968c51acd256b
114 235 1542027815 app/app@@scrcpy@exe/src_file_handler.c.o 11e303a26f189d9a
117 286 1542027815 app/app@@scrcpy@exe/src_frames.c.o 3c5c4dbee035e5ab
88 319 1542027815 app/app@@scrcpy@exe/src_decoder.c.o 60c1438cf7786895
202 338 1542027815 app/app@@scrcpy@exe/src_lock_util.c.o 9265bcc92f144427
215 367 1542027815 app/app@@scrcpy@exe/src_net.c.o 718f65aa73583163
220 408 1542027815 app/app@@scrcpy@exe/src_recorder.c.o 676a7500fb0d45cb
1 470 1542027815 app/app@@scrcpy@exe/src_tiny_xpm.c.o 91851ad29940a4b1
286 485 1542027815 app/app@@test_control_event_queue@exe/tests_test_control_event_queue.c.o 46bff52a98c0b5ca
319 487 1542027815 app/app@@test_control_event_queue@exe/src_control_event.c.o 76492af89a914173
1 488 1542027815 app/app@@scrcpy@exe/src_main.c.o e7dc8583797471c5
367 488 1542027815 app/app@@test_control_event_serialize@exe/src_control_event.c.o fcff2e1105474edf
0 497 1542027815 app/app@@scrcpy@exe/src_screen.c.o 329c18ec2111c8ff
408 515 1542027815 app/app@@test_strutil@exe/tests_test_strutil.c.o 15440f4bca20c50d
338 517 1542027815 app/app@@test_control_event_serialize@exe/tests_test_control_event_serialize.c.o baa0b48891372fcc
470 525 1542027815 app/app@@test_strutil@exe/src_str_util.c.o fcb3a91d36e23e11
525 561 1542027815 app/test_strutil 3448478dadf99adf
487 582 1542027815 app/test_control_event_queue bfca00bc894d3c4f
517 606 1542027815 app/test_control_event_serialize e06ab4ce04dd4fad
147 638 1542027815 app/app@@scrcpy@exe/src_input_manager.c.o 1fe285b256bf5908
236 713 1542027815 app/app@@scrcpy@exe/src_scrcpy.c.o 8b0bae90b272da98
713 891 1542027816 app/scrcpy 8fba96817bb2802c
485 5716 1542027820 server/scrcpy-server.jar 8511d30842df298f
0 264 1542027826 build.ninja ef03cd8523486e97
1 31 1542027826 app/app@@scrcpy@exe/src_fps_counter.c.o 22b968c51acd256b
1 44 1542027826 app/app@@scrcpy@exe/src_file_handler.c.o 11e303a26f189d9a
1 47 1542027826 app/app@@scrcpy@exe/src_controller.c.o 907de440054c77e7
2 50 1542027826 app/app@@scrcpy@exe/src_frames.c.o 3c5c4dbee035e5ab
2 50 1542027826 app/app@@scrcpy@exe/src_recorder.c.o 676a7500fb0d45cb
1 65 1542027826 app/app@@scrcpy@exe/src_decoder.c.o 60c1438cf7786895
31 82 1542027826 app/app@@scrcpy@exe/src_server.c.o 8b376071b5e0aaf1
2 108 1542027826 app/app@@scrcpy@exe/src_input_manager.c.o 1fe285b256bf5908
2 129 1542027826 app/app@@scrcpy@exe/src_screen.c.o 329c18ec2111c8ff
2 162 1542027826 app/app@@scrcpy@exe/src_scrcpy.c.o 8b0bae90b272da98
1 339 1542027826 app/app@@scrcpy@exe/src_main.c.o e7dc8583797471c5
339 538 1542027827 app/scrcpy 8fba96817bb2802c
44 753 1542027827 server/scrcpy-server.jar 8511d30842df298f
0 276 1542027871 build.ninja ef03cd8523486e97
1 37 1542027872 app/app@@scrcpy@exe/src_file_handler.c.o 11e303a26f189d9a
1 42 1542027872 app/app@@scrcpy@exe/src_controller.c.o 907de440054c77e7
1 45 1542027872 app/app@@scrcpy@exe/src_fps_counter.c.o 22b968c51acd256b
2 49 1542027872 app/app@@scrcpy@exe/src_recorder.c.o 676a7500fb0d45cb
1 52 1542027872 app/app@@scrcpy@exe/src_frames.c.o 3c5c4dbee035e5ab
0 64 1542027872 app/app@@scrcpy@exe/src_decoder.c.o 60c1438cf7786895
37 80 1542027872 app/app@@scrcpy@exe/src_server.c.o 8b376071b5e0aaf1
1 128 1542027872 app/app@@scrcpy@exe/src_input_manager.c.o 1fe285b256bf5908
2 138 1542027872 app/app@@scrcpy@exe/src_screen.c.o 329c18ec2111c8ff
2 150 1542027872 app/app@@scrcpy@exe/src_scrcpy.c.o 8b0bae90b272da98
1 370 1542027872 app/app@@scrcpy@exe/src_main.c.o e7dc8583797471c5
370 578 1542027872 app/scrcpy 8fba96817bb2802c
42 688 1542027872 server/scrcpy-server.jar 8511d30842df298f

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Some files were not shown because too many files have changed in this diff