Compare commits

..

76 Commits

Author SHA1 Message Date
Simon Chan
79f9ec5801 Add workaround to capture audio on Android 11
On Android 11, the capture can only be started when the running app is
in the foreground. But scrcpy is not an app; it is a Java application
started from the shell.

As a workaround, start an existing Android shell activity just to start
the capture, then close it immediately.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-26 22:41:07 +01:00
Romain Vimont
b7d2086508 Add audio player
Play the decoded audio using SDL.

The audio player frame sink receives the audio frames, resamples them
and writes them to a byte buffer (introduced by this commit).

On SDL audio callback (from an internal SDL thread), copy samples from
this byte buffer to the SDL audio buffer.

The byte buffer is protected by SDL_LockAudioDevice(), but it has been
designed so that the producer and the consumer may write and read in
parallel, provided that they don't access the same slices of the ring
buffer.

Co-authored-by: Simon Chan <1330321+yume-chan@users.noreply.github.com>
2023-02-26 22:41:07 +01:00
Romain Vimont
b12d1ae7cb Add two-step write feature to bytebuf
If there is exactly one producer, then it can assume that the remaining
space in the buffer will only increase until it writes something.

This assumption may allow the producer to write to the buffer (up to a
known safe size) without any synchronization mechanism, making it
possible to read and write different parts of the buffer in parallel.

The producer can then commit the write with the lock held, and update
its knowledge of the safe remaining empty space.
2023-02-26 22:41:07 +01:00
Romain Vimont
28289ab881 Introduce bytebuf util
Add a ring-buffer for bytes. It will be useful for buffering audio.
2023-02-26 22:41:07 +01:00
Romain Vimont
0cf7dabfc3 Pass AVCodecContext to frame sinks
Frame consumers may need details about the frame format.
2023-02-26 22:41:07 +01:00
Romain Vimont
dc8e6c3cfe Add an audio decoder 2023-02-26 22:41:07 +01:00
Romain Vimont
cb1d98a59c Give a name to decoder instances
This will be useful in logs.
2023-02-26 22:41:07 +01:00
Romain Vimont
9400584364 Rename decoder to video_decoder 2023-02-26 22:41:07 +01:00
Romain Vimont
98c2762eaa Log display sizes in display list
This is more convenient than just the display id alone.
2023-02-26 22:41:07 +01:00
Romain Vimont
243c8cf1b3 Add --list-device-displays 2023-02-26 22:41:07 +01:00
Romain Vimont
63bc6d1053 Move log message helpers to LogUtils
This class will also contain other log helpers.
2023-02-26 22:41:07 +01:00
Romain Vimont
6926f5e4fd Quit on audio configuration failure
When audio capture fails on the device, scrcpy continues mirroring the
video stream. This makes it possible to enable audio by default only
when supported.

However, if an audio configuration error occurs (for example, the user
explicitly selected an unknown audio encoder), it must be treated as an
error and scrcpy must exit.
2023-02-26 22:41:07 +01:00
Romain Vimont
004eb47d4a Add --list-encoders
Add an option to list the device encoders properly.
2023-02-26 22:41:07 +01:00
Romain Vimont
d720d424d8 Move await_for_server() logs
Print the logs on the caller side. This will allow calling the function
in another context without printing the logs.
2023-02-26 22:41:07 +01:00
Romain Vimont
43660079c5 Add --audio-encoder
Similar to --video-encoder, but for audio.
2023-02-26 22:41:07 +01:00
Romain Vimont
a5aba2948a Extract unknown encoder error message
This will allow reusing the same code for audio encoder selection.
2023-02-26 22:41:07 +01:00
Romain Vimont
7a17895111 Add --audio-codec-options
Similar to --video-codec-options, but for audio.
2023-02-26 22:41:07 +01:00
Romain Vimont
fb2c9ef9e7 Extract application of codec options
This will allow reusing the same code for audio codec options.
2023-02-26 22:41:07 +01:00
Romain Vimont
12d79686d1 Add support for AAC audio codec
Add option --audio-codec=aac.
2023-02-26 22:41:07 +01:00
Romain Vimont
923032e5ca Add --audio-codec
Introduce the selection mechanism. Alternative codecs will be added
later.
2023-02-26 22:41:07 +01:00
Romain Vimont
4689cd07d4 Add --audio-bit-rate
Add an option to configure the audio bit-rate.
2023-02-26 22:41:07 +01:00
Romain Vimont
d99c3da08f Disable MethodLength checkstyle on createOptions()
This method will grow as needed to initialize options.
2023-02-26 22:41:07 +01:00
Romain Vimont
4082cb32f9 Rename --encoder to --video-encoder
This prepares the introduction of --audio-encoder.
2023-02-26 22:41:07 +01:00
Romain Vimont
f3b4160d77 Rename --codec-options to --video-codec-options
This prepares the introduction of --audio-codec-options.
2023-02-26 22:41:07 +01:00
Romain Vimont
207ae8b73c Rename --bit-rate to --video-bit-rate
This prepares the introduction of --audio-bit-rate.
2023-02-26 22:41:07 +01:00
Romain Vimont
cf15859214 Rename --codec to --video-codec
This prepares the introduction of --audio-codec.
2023-02-26 22:41:07 +01:00
Romain Vimont
6cdd4e867b Remove default bit-rate on client side
If no bit-rate is passed, let the server use the default value (8Mbps).

This avoids defining a default value on both sides, and passing the
default bit-rate as an argument when starting the server.
2023-02-26 22:41:07 +01:00
Romain Vimont
560002047b Record at least video packets on stop
If the recorder is stopped while it has not received any audio packet
yet, make sure the video stream is correctly recorded.
2023-02-26 22:41:07 +01:00
Romain Vimont
3233aa1c4f Disable audio before Android 11
The permission "android.permission.RECORD_AUDIO" has been added for
shell in Android 11.

Moreover, on lower versions, it may make the server segfault on the
device (happened on a Nexus 5 with Android 6.0.1).

Refs 4feeee8891^!
2023-02-26 22:41:07 +01:00
Romain Vimont
a8a1da1a00 Disable audio on initialization error
By default, audio is enabled (--no-audio must be explicitly passed to
disable it).

However, some devices may not support audio capture (typically devices
below Android 11, or Android 11 when the shell application is not
foreground on start).

In that case, make the server notify the client to dynamically disable
audio forwarding so that it does not wait indefinitely for an audio
stream.

Also disable audio on unknown codec or missing decoder on the client
side, for the same reasons.
2023-02-26 22:41:07 +01:00
Romain Vimont
230abb30c2 Add record audio support
Make the recorder accept two input sources (video and audio), and mux
them into a single file.
2023-02-26 22:41:07 +01:00
Romain Vimont
cb0f4799a2 Rename video-specific variables in recorder
This paves the way to add audio-specific variables.
2023-02-26 22:41:07 +01:00
Romain Vimont
5fc38264c0 Do not merge config audio packets
For video streams (at least H.264 and H.265), the config packet
containing SPS/PPS must be prepended to the next packet (the following
keyframe).

For audio streams (at least OPUS), they must not be merged.
2023-02-26 22:41:07 +01:00
Romain Vimont
3feae6d41b Add an audio demuxer
Add a demuxer which will read the stream from the audio socket.
2023-02-26 22:41:07 +01:00
Romain Vimont
8cf821471f Give a name to demuxer instances
This will be useful in logs.
2023-02-26 22:41:07 +01:00
Romain Vimont
bd51b342b4 Rename demuxer to video_demuxer
There will be another demuxer instance for audio.
2023-02-26 22:41:07 +01:00
Romain Vimont
fa85a128da Extract OPUS extradata
For the OPUS codec, FFmpeg expects the raw extradata, but MediaCodec
wraps it in some structure.

Fix the config packet to send only the raw extradata.
2023-02-26 22:41:07 +01:00
Romain Vimont
cd099f7a2b Use a streamer to send the audio stream
Send each encoded audio packet using a streamer.
2023-02-26 22:41:07 +01:00
Romain Vimont
20042addd4 Encode recorded audio on the device
For now, the encoded packets are just logged into the console.
2023-02-26 22:41:07 +01:00
Simon Chan
4d39bb9d26 Capture device audio
Create an AudioRecorder to capture the audio source REMOTE_SUBMIX.

For now, the captured packets are just logged into the console.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-26 22:41:07 +01:00
Simon Chan
50d180abf2 Add a new socket for audio stream
When audio is enabled, open a new socket to send the audio stream from
the device to the client.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-26 22:41:07 +01:00
Simon Chan
634ba5d7c4 Add --no-audio option
Audio will be enabled by default (when supported). Add an option to
disable it.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-26 22:41:07 +01:00
Romain Vimont
cadf95cfc4 Use FakeContext for Application instance
This will expose the correct package name and UID to the application
context.
2023-02-26 22:41:07 +01:00
Romain Vimont
13c8209071 Use shell package name for workarounds
For consistency.
2023-02-26 22:41:07 +01:00
Romain Vimont
c21f604f30 Use ROOT_UID from FakeContext
Remove USER_ID from ServiceManager, and replace it with a constant in
FakeContext.

This is the same as android.os.Process.ROOT_UID, but that constant was
only introduced in API 29.
2023-02-26 22:41:07 +01:00
Romain Vimont
db0bf6f34b Use PACKAGE_NAME from FakeContext
Remove duplicated constant.
2023-02-26 22:41:06 +01:00
Romain Vimont
09388f5352 Use AttributionSource from FakeContext
FakeContext already provides an AttributionSource instance.

Co-authored-by: Simon Chan <1330321+yume-chan@users.noreply.github.com>
2023-02-26 22:41:06 +01:00
Simon Chan
2f3092e6b4 Add a fake Android Context
Since scrcpy-server is not an Android application (it's a Java
executable), it has no Context.

Some features will require a Context instance to get the package name
and the UID. Add a FakeContext for this purpose.

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-02-26 22:41:06 +01:00
Romain Vimont
0ee541fe26 Improve error message for unknown encoder
The provided encoder name depends on the selected codec. Improve the
error message and the suggestions.
2023-02-26 22:41:06 +01:00
Romain Vimont
58249715ac Rename "codec" variable to "mediaCodec"
This will allow using "codec" for the Codec type.
2023-02-26 22:41:06 +01:00
Romain Vimont
bbf9eeadf1 Make streamer independent of codec type
Rename VideoStreamer to Streamer, and extract a Codec interface which
will also support audio codecs.
2023-02-26 22:41:06 +01:00
Romain Vimont
3b8bb5feb5 Pass all args to ScreenEncoder constructor
There is no good reason to pass some of them in the constructor and the
others as parameters of the streamScreen() method.
2023-02-26 22:41:06 +01:00
Romain Vimont
fa31aaaba8 Move screen encoder initialization
This prepares further refactors.
2023-02-26 22:41:06 +01:00
Romain Vimont
5b6ae2fef3 Write streamer header from ScreenEncoder
The screen encoder is responsible for writing data to the video streamer.
2023-02-26 22:41:06 +01:00
Romain Vimont
bf1a4ae266 Use VideoStreamer directly from ScreenEncoder
The Callbacks interface notifies about new packets. But in addition, the
screen encoder will need to write headers on start.

We could add a function onStart(), but for simplicity, just remove the
interface, which brings no value, and call the streamer directly.

Refs 87972e2022
2023-02-26 22:41:06 +01:00
Romain Vimont
a29da81f1a Simplify error handling on socket creation
On any error, all previously opened sockets must be closed.

Handle these errors in a single catch-block. Currently, there are only 2
sockets, but this will simplify things even more once there are more
sockets.

Note: this commit is better displayed with --ignore-space-change (-b).
2023-02-26 22:41:06 +01:00
Romain Vimont
145ba93bd3 Reorder initialization
Initialize components in the pipeline order: demuxer first, decoder and
recorder second.
2023-02-26 22:41:06 +01:00
Romain Vimont
57356a3a09 Refactor recorder logic
Process the initial config packet (necessary to write the header)
separately.
2023-02-26 22:41:06 +01:00
Romain Vimont
4b1f27bdee Move last packet recording
Write the last packet at the end.
2023-02-26 22:41:06 +01:00
Romain Vimont
e45c499358 Add start() function for recorder
For consistency with the other components, do not start the internal
thread from an init() function.
2023-02-26 22:41:06 +01:00
Romain Vimont
3f99f59394 Open recording file from the recorder thread
The recorder opened the target file from the packet sink open()
callback, called by the demuxer. Only then was the recorder thread
started.

One golden rule for the recorder is to never block the demuxer for I/O,
because that would impact mirroring. This rule was respected for
recording packets, but not for the initial recorder opening.

Therefore, start the recorder thread from sc_recorder_init(), open the
file immediately from the recorder thread, then make it wait for the
stream to start (on packet sink open()).

Now that the recorder can report errors directly (rather than making the
demuxer call fail), it is possible to report a file opening error even
before the packet sink is open.
2023-02-26 22:41:06 +01:00
Romain Vimont
c976698e40 Inline packet_sink impl in recorder
Remove useless wrappers.
2023-02-26 22:41:06 +01:00
Romain Vimont
3bf4712cef Initialize recorder fields from init()
The recorder has two initialization phases: one to initialize the
concrete recorder object, and one to open its packet_sink trait.

Initialize mutex and condvar as part of the object initialization.

If there were several packet_sink traits, the mutex and condvar would
still be initialized only once.
2023-02-26 22:41:06 +01:00
Romain Vimont
31ffb6d33d Report recorder errors
Stop scrcpy on recorder errors.

It was previously stopped indirectly by the demuxer, which failed to
push packets to a recorder in error. Report it directly instead:
 - it avoids waiting for the next demuxer call;
 - it will allow opening the target file from a separate thread and
   stopping immediately on any I/O error.
2023-02-26 22:41:06 +01:00
Romain Vimont
9acd03b5c3 Move previous packet to a local variable
It is only used from run_recorder().
2023-02-26 22:41:06 +01:00
Romain Vimont
cdc4b47ea2 Move pts_origin to a local variable
It is only used from run_recorder().
2023-02-26 22:41:06 +01:00
Romain Vimont
f0fce4125e Change PTS origin type from uint64_t to int64_t
It is initialized from AVPacket.pts, which is an int64_t.
2023-02-26 22:41:06 +01:00
Romain Vimont
6630c6dbb4 Fix --encoder documentation
Mention that it depends on the codec provided by --codec (which is not
necessarily H264 anymore).
2023-02-26 22:41:06 +01:00
Romain Vimont
82c7752cc4 Do not print stacktraces when unnecessary
User-friendly error messages are printed on specific configuration
exceptions. In that case, do not print the stacktrace.

Also handle the user-friendly error message directly where the error
occurs, and print multiline messages in a single log call, to avoid
confusing interleaving.
2023-02-26 22:41:06 +01:00
Romain Vimont
d2dd3bf434 Fix --no-clipboard-autosync bash completion
Fix typo.
2023-02-26 22:41:06 +01:00
Romain Vimont
061dae3790 Split server stop() and join()
For consistency with the other components, call stop() and join()
separately.

This allows stopping all components, then joining them all.
2023-02-26 22:41:06 +01:00
Romain Vimont
52eeb197b3 Print FFmpeg logs
FFmpeg logs are redirected to a specific SDL log category.

Initialize the log level for this category to print them as expected.
2023-02-26 22:41:06 +01:00
Romain Vimont
693570fef1 Move FFmpeg callback initialization
Configure FFmpeg log redirection on start from a log helper.
2023-02-26 22:41:06 +01:00
Romain Vimont
346145f4bd Silence lint warning about constant in API 29
MediaFormat.MIMETYPE_VIDEO_AV1 was added in API 29, but it is not a
problem to inline the constant on older versions.
2023-02-26 22:41:06 +01:00
Romain Vimont
d6aff0e5d7 Remove manifest package name
As reported by gradle:

> Setting the namespace via a source AndroidManifest.xml's package
> attribute is deprecated.
>
> Please instead set the namespace (or testNamespace) in the module's
> build.gradle file, as described here:
> https://developer.android.com/studio/build/configure-app-module#set-namespace
2023-02-26 22:41:06 +01:00
Romain Vimont
79d127b5f1 Upgrade gradle build tools to 7.4.0
Plugin version 7.4.0.
Gradle version 7.5.

Refs <https://developer.android.com/studio/releases/gradle-plugin#updating-gradle>
2023-02-26 22:11:37 +01:00
15 changed files with 274 additions and 119 deletions

View File

@@ -31,6 +31,7 @@ src = [
'src/version.c',
'src/video_buffer.c',
'src/util/acksync.c',
'src/util/average.c',
'src/util/bytebuf.c',
'src/util/file.c',
'src/util/intmap.c',

View File

@@ -4,11 +4,27 @@
#include "util/log.h"
/** Downcast frame_sink to sc_v4l2_sink */
#define SC_AUDIO_PLAYER_NDEBUG // comment to debug
/** Downcast frame_sink to sc_audio_player */
#define DOWNCAST(SINK) container_of(SINK, struct sc_audio_player, frame_sink)
#define SC_AV_SAMPLE_FMT AV_SAMPLE_FMT_S16
#define SC_SDL_SAMPLE_FMT AUDIO_S16
#define SC_AV_SAMPLE_FMT AV_SAMPLE_FMT_FLT
#define SC_SDL_SAMPLE_FMT AUDIO_F32
#define SC_AUDIO_OUTPUT_BUFFER_SAMPLES 480 // 10ms at 48000Hz
// The target number of buffered samples between the producer and the consumer.
// This value is directly used for compensation.
#define SC_TARGET_BUFFERED_SAMPLES (3 * SC_AUDIO_OUTPUT_BUFFER_SAMPLES)
// If the consumer is too late, skip samples to keep at most this value
#define SC_BUFFERED_SAMPLES_THRESHOLD 2400 // 50ms at 48000Hz
// Use a ring-buffer of 1 second (at 48000Hz) between the producer and the
// consumer. It is too big, but it guarantees that the producer and the
// consumer will be able to access it in parallel without locking.
#define SC_BYTEBUF_SIZE_IN_SAMPLES 48000
void
sc_audio_player_sdl_callback(void *userdata, uint8_t *stream, int len_int) {
@@ -20,21 +36,49 @@ sc_audio_player_sdl_callback(void *userdata, uint8_t *stream, int len_int) {
assert(len_int > 0);
size_t len = len_int;
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] SDL callback requests %" SC_PRIsizet " samples",
len / (ap->nb_channels * ap->out_bytes_per_sample));
#endif
size_t read = sc_bytebuf_read_remaining(&ap->buf);
size_t max_buffered_bytes = SC_BUFFERED_SAMPLES_THRESHOLD
* ap->nb_channels * ap->out_bytes_per_sample;
if (read > max_buffered_bytes + len) {
size_t skip = read - (max_buffered_bytes + len);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] Buffered samples threshold exceeded: %" SC_PRIsizet
" bytes, skipping %" SC_PRIsizet " bytes", read, skip);
#endif
// After this callback, exactly max_buffered_bytes will remain
sc_bytebuf_skip(&ap->buf, skip);
read = max_buffered_bytes + len;
}
// Number of buffered samples (may be negative on underflow)
float buffered_samples = ((float) read - len_int)
/ (ap->nb_channels * ap->out_bytes_per_sample);
sc_average_push(&ap->avg_buffered_samples, buffered_samples);
if (read) {
if (read > len) {
read = len;
}
sc_bytebuf_read(&ap->buf, stream, read);
}
if (read < len) {
// Insert silence
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGD("[Audio] Buffer underflow, inserting silence: %" SC_PRIsizet
" bytes", len - read);
#endif
memset(stream + read, 0, len - read);
}
}
static size_t
sc_audio_player_get_swr_buf_size(struct sc_audio_player *ap, size_t samples) {
sc_audio_player_get_buf_size(struct sc_audio_player *ap, size_t samples) {
assert(ap->nb_channels);
assert(ap->out_bytes_per_sample);
return samples * ap->nb_channels * ap->out_bytes_per_sample;
@@ -42,7 +86,7 @@ sc_audio_player_get_swr_buf_size(struct sc_audio_player *ap, size_t samples) {
static uint8_t *
sc_audio_player_get_swr_buf(struct sc_audio_player *ap, size_t min_samples) {
size_t min_buf_size = sc_audio_player_get_swr_buf_size(ap, min_samples);
size_t min_buf_size = sc_audio_player_get_buf_size(ap, min_samples);
if (min_buf_size < ap->swr_buf_alloc_size) {
size_t new_size = min_buf_size + 4096;
uint8_t *buf = realloc(ap->swr_buf, new_size);
@@ -63,8 +107,28 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
const AVCodecContext *ctx) {
struct sc_audio_player *ap = DOWNCAST(sink);
SwrContext *swr_ctx = ap->swr_ctx;
assert(swr_ctx);
SDL_AudioSpec desired = {
.freq = ctx->sample_rate,
.format = SC_SDL_SAMPLE_FMT,
.channels = ctx->ch_layout.nb_channels,
.samples = SC_AUDIO_OUTPUT_BUFFER_SAMPLES,
.callback = sc_audio_player_sdl_callback,
.userdata = ap,
};
SDL_AudioSpec obtained;
ap->device = SDL_OpenAudioDevice(NULL, 0, &desired, &obtained, 0);
if (!ap->device) {
LOGE("Could not open audio device: %s", SDL_GetError());
return false;
}
SwrContext *swr_ctx = swr_alloc();
if (!swr_ctx) {
LOG_OOM();
goto error_close_audio_device;
}
ap->swr_ctx = swr_ctx;
assert(ctx->sample_rate > 0);
assert(ctx->ch_layout.nb_channels > 0);
@@ -83,39 +147,46 @@ sc_audio_player_frame_sink_open(struct sc_frame_sink *sink,
int ret = swr_init(swr_ctx);
if (ret) {
LOGE("Failed to initialize the resampling context");
return false;
goto error_free_swr_ctx;
}
ap->sample_rate = ctx->sample_rate;
ap->nb_channels = ctx->ch_layout.nb_channels;
ap->out_bytes_per_sample = out_bytes_per_sample;
size_t initial_swr_buf_size = sc_audio_player_get_swr_buf_size(ap, 4096);
size_t bytebuf_size =
sc_audio_player_get_buf_size(ap, SC_BYTEBUF_SIZE_IN_SAMPLES);
bool ok = sc_bytebuf_init(&ap->buf, bytebuf_size);
if (!ok) {
goto error_free_swr_ctx;
}
ap->safe_empty_buffer = sc_bytebuf_write_remaining(&ap->buf);
size_t initial_swr_buf_size = sc_audio_player_get_buf_size(ap, 4096);
ap->swr_buf = malloc(initial_swr_buf_size);
if (!ap->swr_buf) {
LOG_OOM();
return false;
goto error_destroy_bytebuf;
}
ap->swr_buf_alloc_size = initial_swr_buf_size;
SDL_AudioSpec desired = {
.freq = ctx->sample_rate,
.format = SC_SDL_SAMPLE_FMT,
.channels = ctx->ch_layout.nb_channels,
.samples = 512, // ~10ms at 48000Hz
.callback = sc_audio_player_sdl_callback,
.userdata = ap,
};
SDL_AudioSpec obtained;
ap->device = SDL_OpenAudioDevice(NULL, 0, &desired, &obtained, 0);
if (!ap->device) {
LOGE("Could not open audio device: %s", SDL_GetError());
return false;
}
sc_average_init(&ap->avg_buffered_samples, 32);
ap->samples_since_resync = 0;
SDL_PauseAudioDevice(ap->device, 0);
return true;
error_destroy_bytebuf:
sc_bytebuf_destroy(&ap->buf);
error_free_swr_ctx:
swr_free(&ap->swr_ctx);
error_close_audio_device:
SDL_CloseAudioDevice(ap->device);
return false;
}
static void
@@ -125,6 +196,10 @@ sc_audio_player_frame_sink_close(struct sc_frame_sink *sink) {
assert(ap->device);
SDL_PauseAudioDevice(ap->device, 1);
SDL_CloseAudioDevice(ap->device);
free(ap->swr_buf);
sc_bytebuf_destroy(&ap->buf);
swr_free(&ap->swr_ctx);
}
static bool
@@ -148,12 +223,12 @@ sc_audio_player_frame_sink_push(struct sc_frame_sink *sink, const AVFrame *frame
LOGE("Resampling failed: %d", ret);
return false;
}
LOGI("ret=%d dst_nb_samples=%d\n", ret, dst_nb_samples);
size_t swr_buf_size = sc_audio_player_get_swr_buf_size(ap, ret);
LOGI("== swr_buf_size %lu", swr_buf_size);
// TODO clock drift compensation
size_t samples_written = ret;
size_t swr_buf_size = sc_audio_player_get_buf_size(ap, samples_written);
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGI("[Audio] %" SC_PRIsizet " samples written to buffer", samples_written);
#endif
// It should almost always be possible to write without lock
bool can_write_without_lock = swr_buf_size <= ap->safe_empty_buffer;
@@ -170,36 +245,39 @@ sc_audio_player_frame_sink_push(struct sc_frame_sink *sink, const AVFrame *frame
// The next time, at least the current empty space will remain
ap->safe_empty_buffer = sc_bytebuf_write_remaining(&ap->buf);
// Read the value written by the SDL thread under lock
float avg;
bool has_avg = sc_average_get(&ap->avg_buffered_samples, &avg);
SDL_UnlockAudioDevice(ap->device);
if (has_avg) {
ap->samples_since_resync += samples_written;
if (ap->samples_since_resync >= ap->sample_rate) {
// Resync every second
ap->samples_since_resync = 0;
int diff = SC_TARGET_BUFFERED_SAMPLES - avg;
#ifndef SC_AUDIO_PLAYER_NDEBUG
LOGI("[Audio] Average buffered samples = %f, compensation %d",
avg, diff);
#endif
// Compensate the diff over 3 seconds (but will be recomputed after
// 1 second)
int ret = swr_set_compensation(swr_ctx, diff, 3 * ap->sample_rate);
if (ret < 0) {
LOGW("Resampling compensation failed: %d", ret);
// not fatal
}
}
}
return true;
}
bool
sc_audio_player_init(struct sc_audio_player *ap,
const struct sc_audio_player_callbacks *cbs,
void *cbs_userdata) {
bool ok = sc_bytebuf_init(&ap->buf, 128 * 1024);
if (!ok) {
return false;
}
ap->swr_ctx = swr_alloc();
if (!ap->swr_ctx) {
sc_bytebuf_destroy(&ap->buf);
LOG_OOM();
return false;
}
ap->safe_empty_buffer = sc_bytebuf_write_remaining(&ap->buf);
ap->swr_buf = NULL;
ap->swr_buf_alloc_size = 0;
assert(cbs && cbs->on_ended);
ap->cbs = cbs;
ap->cbs_userdata = cbs_userdata;
void
sc_audio_player_init(struct sc_audio_player *ap) {
static const struct sc_frame_sink_ops ops = {
.open = sc_audio_player_frame_sink_open,
.close = sc_audio_player_frame_sink_close,
@@ -207,12 +285,4 @@ sc_audio_player_init(struct sc_audio_player *ap,
};
ap->frame_sink.ops = &ops;
return true;
}
void
sc_audio_player_destroy(struct sc_audio_player *ap) {
sc_bytebuf_destroy(&ap->buf);
swr_free(&ap->swr_ctx);
free(ap->swr_buf);
}

View File

@@ -5,6 +5,7 @@
#include <stdbool.h>
#include "trait/frame_sink.h"
#include <util/average.h>
#include <util/bytebuf.h>
#include <util/thread.h>
@@ -35,6 +36,10 @@ struct sc_audio_player {
uint8_t *swr_buf;
size_t swr_buf_alloc_size;
// Number of buffered samples (may be negative on underflow)
struct sc_average avg_buffered_samples;
unsigned samples_since_resync;
const struct sc_audio_player_callbacks *cbs;
void *cbs_userdata;
};
@@ -43,12 +48,7 @@ struct sc_audio_player_callbacks {
void (*on_ended)(struct sc_audio_player *ap, bool success, void *userdata);
};
bool
sc_audio_player_init(struct sc_audio_player *ap,
const struct sc_audio_player_callbacks *cbs,
void *cbs_userdata);
void
sc_audio_player_destroy(struct sc_audio_player *ap);
sc_audio_player_init(struct sc_audio_player *ap);
#endif

View File

@@ -217,17 +217,6 @@ sc_recorder_on_ended(struct sc_recorder *recorder, bool success,
}
}
static void
sc_audio_player_on_ended(struct sc_audio_player *ap, bool success,
void *userdata) {
(void) ap;
(void) userdata;
if (!success) {
// TODO
}
}
static void
sc_video_demuxer_on_ended(struct sc_demuxer *demuxer, bool eos,
void *userdata) {
@@ -314,7 +303,6 @@ scrcpy(struct scrcpy_options *options) {
bool file_pusher_initialized = false;
bool recorder_initialized = false;
bool recorder_started = false;
bool audio_player_initialized = false;
#ifdef HAVE_V4L2
bool v4l2_sink_initialized = false;
#endif
@@ -686,15 +674,7 @@ aoa_hid_end:
sc_decoder_add_sink(&s->video_decoder, &s->screen.frame_sink);
if (options->audio) {
static const struct sc_audio_player_callbacks audio_player_cbs = {
.on_ended = sc_audio_player_on_ended,
};
if (!sc_audio_player_init(&s->audio_player,
&audio_player_cbs, NULL)) {
goto end;
}
audio_player_initialized = true;
sc_audio_player_init(&s->audio_player);
sc_decoder_add_sink(&s->audio_decoder, &s->audio_player.frame_sink);
}
}
@@ -817,10 +797,6 @@ end:
sc_recorder_destroy(&s->recorder);
}
if (audio_player_initialized) {
sc_audio_player_destroy(&s->audio_player);
}
if (file_pusher_initialized) {
sc_file_pusher_join(&s->file_pusher);
sc_file_pusher_destroy(&s->file_pusher);

View File

@@ -338,9 +338,9 @@ sc_v4l2_sink_push(struct sc_v4l2_sink *vs, const AVFrame *frame) {
}
static bool
sc_v4l2_frame_sink_open(struct sc_frame_sink *sink) {
sc_v4l2_frame_sink_open(struct sc_frame_sink *sink, const AVCodecContext *ctx) {
struct sc_v4l2_sink *vs = DOWNCAST(sink);
return sc_v4l2_sink_open(vs);
return sc_v4l2_sink_open(vs, ctx);
}
static void

View File

@@ -7,7 +7,7 @@ buildscript {
mavenCentral()
}
dependencies {
classpath 'com.android.tools.build:gradle:7.2.2'
classpath 'com.android.tools.build:gradle:7.4.0'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files

View File

@@ -1,5 +1,5 @@
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-all.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-7.5-all.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists

View File

@@ -1,6 +1,7 @@
apply plugin: 'com.android.application'
android {
namespace 'com.genymobile.scrcpy'
compileSdkVersion 33
defaultConfig {
applicationId "com.genymobile.scrcpy"

View File

@@ -1,2 +1,2 @@
<!-- not a real Android application, it is run by app_process manually -->
<manifest package="com.genymobile.scrcpy"/>
<manifest />

View File

@@ -1,7 +1,11 @@
package com.genymobile.scrcpy;
import com.genymobile.scrcpy.wrappers.ServiceManager;
import android.annotation.SuppressLint;
import android.annotation.TargetApi;
import android.content.ComponentName;
import android.content.Intent;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.AudioTimestamp;
@@ -12,6 +16,7 @@ import android.os.Build;
import android.os.Handler;
import android.os.HandlerThread;
import android.os.Looper;
import android.os.SystemClock;
import java.io.IOException;
import java.nio.ByteBuffer;
@@ -40,10 +45,13 @@ public final class AudioEncoder {
}
private static final int SAMPLE_RATE = 48000;
private static final int CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_STEREO;
private static final int CHANNELS = 2;
private static final int FORMAT = AudioFormat.ENCODING_PCM_16BIT;
private static final int BYTES_PER_SAMPLE = 2;
private static final int BUFFER_MS = 10; // milliseconds
private static final int BUFFER_SIZE = SAMPLE_RATE * CHANNELS * BUFFER_MS / 1000;
private static final int BUFFER_MS = 5; // milliseconds
private static final int BUFFER_SIZE = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE * BUFFER_MS / 1000;
private final Streamer streamer;
private final int bitRate;
@@ -72,9 +80,9 @@ public final class AudioEncoder {
private static AudioFormat createAudioFormat() {
AudioFormat.Builder builder = new AudioFormat.Builder();
builder.setEncoding(AudioFormat.ENCODING_PCM_16BIT);
builder.setEncoding(FORMAT);
builder.setSampleRate(SAMPLE_RATE);
builder.setChannelMask(CHANNELS == 2 ? AudioFormat.CHANNEL_IN_STEREO : AudioFormat.CHANNEL_IN_MONO);
builder.setChannelMask(CHANNEL_CONFIG);
return builder.build();
}
@@ -88,7 +96,8 @@ public final class AudioEncoder {
}
builder.setAudioSource(MediaRecorder.AudioSource.REMOTE_SUBMIX);
builder.setAudioFormat(createAudioFormat());
builder.setBufferSizeInBytes(1024 * 1024);
int minBufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE, CHANNEL_CONFIG, FORMAT);
builder.setBufferSizeInBytes(minBufferSize);
return builder.build();
}
@@ -211,6 +220,32 @@ public final class AudioEncoder {
}
}
private static void startWorkaroundAndroid11() {
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
// Android 11 requires apps to be in the foreground to record audio.
// Normally, each app has its own user ID, so Android checks whether the requesting app has a user ID that is in the foreground.
// But the scrcpy server is NOT an app; it is a Java application started from the Android shell, so it shares the shell's user ID
// (2000, "com.android.shell").
// If an activity from the Android shell is running in the foreground, the permission system will believe that scrcpy is also in
// the foreground.
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
Intent intent = new Intent(Intent.ACTION_MAIN);
intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
intent.addCategory(Intent.CATEGORY_LAUNCHER);
intent.setComponent(new ComponentName(FakeContext.PACKAGE_NAME, "com.android.shell.HeapDumpActivity"));
ServiceManager.getActivityManager().startActivityAsUserWithFeature(intent);
// Wait for activity to start
SystemClock.sleep(150);
}
}
}
private static void stopWorkaroundAndroid11() {
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
ServiceManager.getActivityManager().forceStopPackage(FakeContext.PACKAGE_NAME);
}
}
@TargetApi(Build.VERSION_CODES.M)
public void encode() throws IOException {
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.R) {
@@ -228,7 +263,6 @@ public final class AudioEncoder {
try {
Codec codec = streamer.getCodec();
mediaCodec = createMediaCodec(codec, encoderName);
recorder = createAudioRecord();
mediaCodecThread = new HandlerThread("AudioEncoder");
mediaCodecThread.start();
@@ -237,7 +271,19 @@ public final class AudioEncoder {
mediaCodec.setCallback(new EncoderCallback(), new Handler(mediaCodecThread.getLooper()));
mediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
recorder.startRecording();
startWorkaroundAndroid11();
try {
recorder = createAudioRecord();
recorder.startRecording();
} catch (UnsupportedOperationException e) {
if (Build.VERSION.SDK_INT == Build.VERSION_CODES.R) {
Ln.e("Failed to start audio capture");
Ln.e("On Android 11, it is only possible to capture in foreground, make sure that the device is unlocked when starting scrcpy.");
throw new ConfigurationException("Unsupported audio capture");
}
} finally {
stopWorkaroundAndroid11();
}
recorderStarted = true;
final MediaCodec mediaCodecRef = mediaCodec;

View File

@@ -9,6 +9,7 @@ import android.os.Process;
public final class FakeContext extends ContextWrapper {
public static final String PACKAGE_NAME = "com.android.shell";
public static final int ROOT_UID = 0; // Like android.os.Process.ROOT_UID, but before API 29
private static final FakeContext INSTANCE = new FakeContext();

View File

@@ -1,10 +1,12 @@
package com.genymobile.scrcpy;
import android.annotation.SuppressLint;
import android.media.MediaFormat;
public enum VideoCodec implements Codec {
H264(0x68_32_36_34, "h264", MediaFormat.MIMETYPE_VIDEO_AVC),
H265(0x68_32_36_35, "h265", MediaFormat.MIMETYPE_VIDEO_HEVC),
@SuppressLint("InlinedApi") // introduced in API 21
AV1(0x00_61_76_31, "av1", MediaFormat.MIMETYPE_VIDEO_AV1);
private final int id; // 4-byte ASCII representation of the name
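The `id` field packs the codec's four-character name as big-endian ASCII, e.g. `0x68_32_36_34` is "h264", while "av1" is padded with a leading zero byte. A standalone sketch (not scrcpy code) decoding an id back to its name:

```java
import java.nio.charset.StandardCharsets;

// Standalone demo (not part of scrcpy): decode a 4-byte ASCII codec id.
public class CodecIdDemo {
    static String idToName(int id) {
        byte[] bytes = {(byte) (id >>> 24), (byte) (id >>> 16), (byte) (id >>> 8), (byte) id};
        // Skip a leading zero padding byte, as used by "av1" (3 characters)
        int start = bytes[0] == 0 ? 1 : 0;
        return new String(bytes, start, 4 - start, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        System.out.println(idToName(0x68_32_36_34)); // h264
        System.out.println(idToName(0x00_61_76_31)); // av1
    }
}
```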

View File

@@ -1,22 +1,30 @@
package com.genymobile.scrcpy.wrappers;
import com.genymobile.scrcpy.FakeContext;
import com.genymobile.scrcpy.Ln;
import android.annotation.SuppressLint;
import android.annotation.TargetApi;
import android.content.Intent;
import android.os.Binder;
import android.os.Build;
import android.os.Bundle;
import android.os.IBinder;
import android.os.IInterface;
import android.os.Process;
import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
@SuppressLint("PrivateApi,DiscouragedPrivateApi")
public class ActivityManager {
private final IInterface manager;
private Method getContentProviderExternalMethod;
private boolean getContentProviderExternalMethodNewVersion = true;
private Method removeContentProviderExternalMethod;
private Method startActivityAsUserWithFeatureMethod;
private Method forceStopPackageMethod;
public ActivityManager(IInterface manager) {
this.manager = manager;
@@ -43,16 +51,17 @@ public class ActivityManager {
return removeContentProviderExternalMethod;
}
@TargetApi(Build.VERSION_CODES.Q)
private ContentProvider getContentProviderExternal(String name, IBinder token) {
try {
Method method = getGetContentProviderExternalMethod();
Object[] args;
if (getContentProviderExternalMethodNewVersion) {
// new version
args = new Object[]{name, Process.ROOT_UID, token, null};
args = new Object[]{name, FakeContext.ROOT_UID, token, null};
} else {
// old version
args = new Object[]{name, Process.ROOT_UID, token};
args = new Object[]{name, FakeContext.ROOT_UID, token};
}
// ContentProviderHolder providerHolder = getContentProviderExternal(...);
Object providerHolder = method.invoke(manager, args);
@@ -85,4 +94,55 @@ public class ActivityManager {
public ContentProvider createSettingsProvider() {
return getContentProviderExternal("settings", new Binder());
}
private Method getStartActivityAsUserWithFeatureMethod() throws NoSuchMethodException, ClassNotFoundException {
if (startActivityAsUserWithFeatureMethod == null) {
Class<?> iApplicationThreadClass = Class.forName("android.app.IApplicationThread");
Class<?> profilerInfo = Class.forName("android.app.ProfilerInfo");
startActivityAsUserWithFeatureMethod = manager.getClass()
.getMethod("startActivityAsUserWithFeature", iApplicationThreadClass, String.class, String.class, Intent.class, String.class,
IBinder.class, String.class, int.class, int.class, profilerInfo, Bundle.class, int.class);
}
return startActivityAsUserWithFeatureMethod;
}
@SuppressWarnings("ConstantConditions")
public int startActivityAsUserWithFeature(Intent intent) {
try {
Method method = getStartActivityAsUserWithFeatureMethod();
return (int) method.invoke(
/* this */ manager,
/* caller */ null,
/* callingPackage */ FakeContext.PACKAGE_NAME,
/* callingFeatureId */ null,
/* intent */ intent,
/* resolvedType */ null,
/* resultTo */ null,
/* resultWho */ null,
/* requestCode */ 0,
/* startFlags */ 0,
/* profilerInfo */ null,
/* bOptions */ null,
/* userId */ /* UserHandle.USER_CURRENT */ -2);
} catch (Throwable e) {
Ln.e("Could not invoke method", e);
return 0;
}
}
private Method getForceStopPackageMethod() throws NoSuchMethodException {
if (forceStopPackageMethod == null) {
forceStopPackageMethod = manager.getClass().getMethod("forceStopPackage", String.class, int.class);
}
return forceStopPackageMethod;
}
public void forceStopPackage(String packageName) {
try {
Method method = getForceStopPackageMethod();
method.invoke(manager, packageName, /* userId */ /* UserHandle.USER_CURRENT */ -2);
} catch (Throwable e) {
Ln.e("Could not invoke method", e);
}
}
}
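The wrapper above resolves each hidden `IActivityManager` method lazily via reflection and caches the `Method` object so the lookup cost is paid only once. A minimal self-contained sketch of that caching pattern, using `String.length()` as a stand-in target since the hidden Android interfaces are not available off-device:

```java
import java.lang.reflect.Method;

// Standalone sketch of the lazy Method-caching pattern used by the wrapper.
// String.length() stands in for the hidden IActivityManager methods.
public class ReflectionCacheDemo {
    private Method lengthMethod;

    private Method getLengthMethod() throws NoSuchMethodException {
        if (lengthMethod == null) {
            lengthMethod = String.class.getMethod("length");
        }
        return lengthMethod; // cached after the first lookup
    }

    public int invokeLength(String target) {
        try {
            return (int) getLengthMethod().invoke(target);
        } catch (ReflectiveOperationException e) {
            return -1; // mirror the wrapper's "log and return a default" style
        }
    }

    public static void main(String[] args) {
        System.out.println(new ReflectionCacheDemo().invokeLength("h264")); // 4
    }
}
```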

View File

@@ -7,7 +7,6 @@ import android.content.ClipData;
import android.content.IOnPrimaryClipChangedListener;
import android.os.Build;
import android.os.IInterface;
import android.os.Process;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
@@ -63,9 +62,9 @@ public class ClipboardManager {
return (ClipData) method.invoke(manager, FakeContext.PACKAGE_NAME);
}
if (alternativeMethod) {
return (ClipData) method.invoke(manager, FakeContext.PACKAGE_NAME, null, Process.ROOT_UID);
return (ClipData) method.invoke(manager, FakeContext.PACKAGE_NAME, null, FakeContext.ROOT_UID);
}
return (ClipData) method.invoke(manager, FakeContext.PACKAGE_NAME, Process.ROOT_UID);
return (ClipData) method.invoke(manager, FakeContext.PACKAGE_NAME, FakeContext.ROOT_UID);
}
private static void setPrimaryClip(Method method, boolean alternativeMethod, IInterface manager, ClipData clipData)
@@ -73,9 +72,9 @@ public class ClipboardManager {
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.Q) {
method.invoke(manager, clipData, FakeContext.PACKAGE_NAME);
} else if (alternativeMethod) {
method.invoke(manager, clipData, FakeContext.PACKAGE_NAME, null, Process.ROOT_UID);
method.invoke(manager, clipData, FakeContext.PACKAGE_NAME, null, FakeContext.ROOT_UID);
} else {
method.invoke(manager, clipData, FakeContext.PACKAGE_NAME, Process.ROOT_UID);
method.invoke(manager, clipData, FakeContext.PACKAGE_NAME, FakeContext.ROOT_UID);
}
}
@@ -110,9 +109,9 @@ public class ClipboardManager {
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.Q) {
method.invoke(manager, listener, FakeContext.PACKAGE_NAME);
} else if (alternativeMethod) {
method.invoke(manager, listener, FakeContext.PACKAGE_NAME, null, Process.ROOT_UID);
method.invoke(manager, listener, FakeContext.PACKAGE_NAME, null, FakeContext.ROOT_UID);
} else {
method.invoke(manager, listener, FakeContext.PACKAGE_NAME, Process.ROOT_UID);
method.invoke(manager, listener, FakeContext.PACKAGE_NAME, FakeContext.ROOT_UID);
}
}

View File

@@ -9,7 +9,6 @@ import android.content.AttributionSource;
import android.os.Build;
import android.os.Bundle;
import android.os.IBinder;
import android.os.Process;
import java.io.Closeable;
import java.lang.reflect.InvocationTargetException;
@@ -139,7 +138,7 @@ public class ContentProvider implements Closeable {
public String getValue(String table, String key) throws SettingsException {
String method = getGetMethod(table);
Bundle arg = new Bundle();
arg.putInt(CALL_METHOD_USER_KEY, Process.ROOT_UID);
arg.putInt(CALL_METHOD_USER_KEY, FakeContext.ROOT_UID);
try {
Bundle bundle = call(method, key, arg);
if (bundle == null) {
@@ -155,7 +154,7 @@ public class ContentProvider implements Closeable {
public void putValue(String table, String key, String value) throws SettingsException {
String method = getPutMethod(table);
Bundle arg = new Bundle();
arg.putInt(CALL_METHOD_USER_KEY, Process.ROOT_UID);
arg.putInt(CALL_METHOD_USER_KEY, FakeContext.ROOT_UID);
arg.putString(NAME_VALUE_TABLE_VALUE, value);
try {
call(method, key, arg);