Compare commits

...

22 Commits

Author SHA1 Message Date
Romain Vimont
5c2f134292 Log PTS fixing in debug mode
Audio PTS are retrieved by AudioRecord.getTimestamp(), so they do not
necessarily match the number of samples exactly (this allows drift and lag
to be taken into account).

In particular, the Matroska muxer uses a timebase of 1/1000 (1 ms
precision), so two consecutive timestamps in microseconds may sometimes
end up within the same millisecond, causing the warning.

Since it is "expected", lower the log level from warning to debug.
2023-11-14 18:59:23 +01:00
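To make the collision concrete, here is a small editorial sketch (plain Java arithmetic, not part of this changeset): two audio timestamps in microseconds that are strictly increasing but less than 1 ms apart land on the same tick once rescaled to the Matroska 1/1000 time base, which is exactly the situation the fix-up above handles.

```java
public class MkvPtsCollision {
    public static void main(String[] args) {
        long ptsA = 1_000_300; // µs
        long ptsB = 1_000_900; // µs, later than ptsA but less than 1 ms apart
        long mkvA = ptsA / 1000; // 1000 ms in the Matroska 1/1000 time base
        long mkvB = ptsB / 1000; // also 1000 ms -> no longer strictly increasing
        System.out.println(mkvA + " -> " + mkvB);
    }
}
```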
Romain Vimont
ef9dc85da4 Add support for RAW audio (WAV) recording
RAW audio forwarding was supported, but recording was not.

Add support for recording a raw audio stream to a `.wav` file (and
`.mkv`).
2023-11-14 18:59:23 +01:00
Romain Vimont
2722865ce6 Upgrade FFmpeg build to 6.1-scrcpy-2
Use a build with WAV muxer.
2023-11-14 09:42:25 +01:00
Romain Vimont
bb0e51d6fc Fix audio PTS by the duration of 1 sample
If the PTS difference between two consecutive blocks of audio is less than
1 sample, then it will be considered non-increasing by FFmpeg muxers with a
time_base of 1/sample_rate.

Increase the PTS by 1 sample instead.
2023-11-14 09:41:56 +01:00
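A minimal sketch of that rule (the constants match the AudioCapture diff further down, but the helper name and wrapper class are assumptions): a PTS that is less than one sample ahead of the previous one is bumped by a full sample duration, so muxers with time_base = 1/sample_rate still see strictly increasing values.

```java
public class OneSamplePtsFix {
    static final int SAMPLE_RATE = 48000;
    // ceil(1_000_000 / 48000) = 21 µs, the duration of one sample
    static final long ONE_SAMPLE_US = (1_000_000 + SAMPLE_RATE - 1) / SAMPLE_RATE;

    static long fix(long pts, long previousPts) {
        if (previousPts != 0 && pts < previousPts + ONE_SAMPLE_US) {
            pts = previousPts + ONE_SAMPLE_US; // advance by one full sample
        }
        return pts;
    }

    public static void main(String[] args) {
        // 100_010 is only 10 µs after the previous PTS, so it is pushed to 100_021
        System.out.println(fix(100_010, 100_000));
    }
}
```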
Romain Vimont
0da94d0742 Compute PTS of intermediate blocks
If several reads were performed for a single captured audio block (e.g. if
the read size was smaller than the captured block), then the provided
timestamp was the same for all packets.

Recompute the timestamp for each of them.
2023-11-14 09:15:34 +01:00
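A rough sketch of that recomputation (constants taken from the AudioCapture diff below; the structure is an assumption, not the actual code): each packet's PTS advances by the duration of the bytes already read, instead of repeating the timestamp of the captured block.

```java
public class IntermediatePts {
    static final int SAMPLE_RATE = 48000;
    static final int CHANNELS = 2;
    static final int BYTES_PER_SAMPLE = 2;

    // Duration (in µs) covered by a given number of PCM bytes
    static long durationUs(int bytes) {
        return (long) bytes * 1_000_000 / (CHANNELS * BYTES_PER_SAMPLE * SAMPLE_RATE);
    }

    public static void main(String[] args) {
        long blockPts = 5_000_000; // µs, timestamp of the captured block
        int readSize = 1024 * CHANNELS * BYTES_PER_SAMPLE; // bytes per read
        long firstPacketPts = blockPts;
        long secondPacketPts = blockPts + durationUs(readSize); // ~21_333 µs later
        System.out.println(firstPacketPts + ", " + secondPacketPts);
    }
}
```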
Romain Vimont
e3520ecd50 Read audio by blocks of 1024 samples
In practice, the system captures audio samples by blocks of 1024
samples.

Replace the hardcoded value of 5 milliseconds (240 samples), and let
AudioRecord fill the input buffer provided by MediaCodec (or by
AudioRawRecorder), with a maximum size of 1024 samples (just in case).
2023-11-14 09:11:29 +01:00
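For reference, the numbers behind that read size (the constants appear in the AudioCapture diff further down; the wrapper class is only for illustration): 1024 stereo 16-bit samples per read is 4096 bytes, or about 21.3 ms of audio at 48 kHz.

```java
public class ReadSize {
    static final int SAMPLE_RATE = 48000;
    static final int CHANNELS = 2;
    static final int BYTES_PER_SAMPLE = 2;
    static final int MAX_READ_SIZE = 1024 * CHANNELS * BYTES_PER_SAMPLE; // 4096 bytes

    public static void main(String[] args) {
        double ms = 1024.0 * 1000 / SAMPLE_RATE; // ≈ 21.3 ms per full read
        System.out.println(MAX_READ_SIZE + " bytes ≈ " + ms + " ms of audio");
    }
}
```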
Romain Vimont
8704548274 Increase default audio buffer for FLAC
FLAC is not low latency: the default encoder produces blocks of 4096
samples, which represent ~85.333ms.

Increase the audio buffer by default so that audio playback works.
2023-11-14 09:11:29 +01:00
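The ~85.333 ms figure, spelled out (simple arithmetic, not scrcpy code): one default FLAC encoder block of 4096 samples at 48 kHz lasts longer than the regular 50 ms audio buffer, hence the larger default visible in the CLI diff below.

```java
public class FlacBlockDuration {
    public static void main(String[] args) {
        double blockMs = 4096.0 * 1000 / 48000; // ≈ 85.333 ms per FLAC block
        System.out.println(blockMs + " ms > 50 ms default audio buffer");
    }
}
```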
megapro17
a4cbc2842d Add support for FLAC audio codec
PR #4410 <https://github.com/Genymobile/scrcpy/pull/4410>

Co-authored-by: Romain Vimont <rom@rom1v.com>
Signed-off-by: Romain Vimont <rom@rom1v.com>
2023-11-14 09:11:27 +01:00
Romain Vimont
81d494c1a4 Upgrade FFmpeg build to 6.1-scrcpy
Upgrade to FFmpeg 6.1, and with FLAC support enabled.
2023-11-14 09:09:21 +01:00
Romain Vimont
387f40b168 Fix OPUS packet in an endian-independent way
Reading the header id as an int assumed that the current endianness was
little endian. Read it into a byte array to remove this assumption.
2023-11-14 09:09:21 +01:00
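A sketch of the endian-independent check (the real version is in the Streamer diff below; the class wrapper and demo are illustrative only): the 8 header bytes are compared directly, instead of being read as a long whose value would depend on the buffer's byte order.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class OpusHeaderCheck {
    static boolean isOpusHeader(ByteBuffer buffer) {
        byte[] expected = {'A', 'O', 'P', 'U', 'S', 'H', 'D', 'R'};
        byte[] actual = new byte[8];
        buffer.get(actual); // consumes 8 bytes, independent of byte order
        return Arrays.equals(actual, expected);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap("AOPUSHDR".getBytes());
        System.out.println(isOpusHeader(buf)); // true on any endianness
    }
}
```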
Romain Vimont
e637feba51 Update muxers documentation
Recording now supports formats other than mp4 and mkv.
2023-11-14 09:08:24 +01:00
Romain Vimont
5e59ed3135 Always initialize SDL with the video subsystem
Clipboard synchronization requires SDL_INIT_VIDEO, so always initialize
the video subsystem, even if --no-video or --no-video-playback is
passed.

Refs caf594c90e
Fixes #4418 <https://github.com/Genymobile/scrcpy/issues/4418>
2023-11-11 11:41:15 +01:00
Romain Vimont
4eb33054cd Do not log EPIPE on close for raw audio
Handle EPIPE the same way in AudioRawRecorder as in AudioEncoder.

This prevents useless errors on close.
2023-11-11 11:24:47 +01:00
Romain Vimont
420d3a40dd Fix error handling in raw audio recorder
It is incorrect to ever call:

    streamer.writeDisableStream(...);

after:

    streamer.writeAudioHeader();

Move the try-catch block so that it can never happen.
2023-11-11 11:24:47 +01:00
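A hypothetical, self-contained illustration of that ordering rule (the real fix appears in the AudioRawRecorder diff below; the class and flag here are invented for the example): a disable-stream message is only valid while the audio header has not been written, so a failing capture start must be reported before the header goes out.

```java
public class DisableStreamOrdering {
    static boolean headerWritten = false;

    static void writeAudioHeader() {
        headerWritten = true;
    }

    static void writeDisableStream() {
        if (headerWritten) {
            throw new IllegalStateException("disable-stream sent after audio header");
        }
    }

    public static void main(String[] args) {
        boolean captureStartFails = true; // pretend capture.start() threw
        if (captureStartFails) {
            writeDisableStream(); // still allowed: the header was not written yet
            return;
        }
        writeAudioHeader();
        // From this point on, errors must not trigger writeDisableStream()
    }
}
```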
Romain Vimont
9d5f53caa7 Stop capture on any RAW audio error
The server was stopped only if an IOException occurred during RAW audio
capture; RuntimeExceptions were not caught.
2023-11-11 11:24:47 +01:00
Romain Vimont
3c45625324 Log recording RAW audio codec as error
It is not possible to record with a RAW audio codec, so the log before
exiting should be an error rather than a warning.
2023-11-11 11:24:47 +01:00
Romain Vimont
11d738321f Recover on invalid camera FPS ranges
Some devices may provide invalid ranges, causing an
IllegalArgumentException "lower must be less than or equal to upper".

Catch the exception to list the cameras anyway.

Refs #4403 <https://github.com/Genymobile/scrcpy/issues/4403>
2023-11-05 21:45:15 +01:00
Romain Vimont
ccaa832f48 Simplify --list-cameras output
Remove --video-source=camera from the output of --list-cameras (this is
implicit).
2023-11-05 21:44:33 +01:00
Romain Vimont
4e4ddc499f Return the FakeContext as application context
This prevents getApplicationContext() from returning null and causing a
NullPointerException.

Fixes #4392 <https://github.com/Genymobile/scrcpy/issues/4392#issuecomment-1792806080>
2023-11-03 19:07:15 +01:00
Romain Vimont
8d76b3e06d Fill application context for camera
Using the camera fails on some devices without a proper application
context.

Fixes #4392 <https://github.com/Genymobile/scrcpy/issues/4392>
2023-11-03 19:07:08 +01:00
Romain Vimont
85a0b935c9 Always assign a system context as base context
FakeContext used ActivityThread.getSystemContext() as base context only
in some cases, because it caused problems on some devices:
 - warnings on Xiaomi devices [1], which are now fixed by
   b8c5853aa6
 - issues related to Looper [2], which are solved by just calling
   Looper.prepare*()

Therefore, we can now always assign a base context, which simplifies the code
and helps to solve camera issues on some devices (#4392).

[1] <https://github.com/Genymobile/scrcpy/issues/4015#issuecomment-1595382142>
[2] <https://github.com/Genymobile/scrcpy/issues/3805#issuecomment-1596148031>

Fixes #4392 <https://github.com/Genymobile/scrcpy/issues/4392>
2023-11-03 19:05:50 +01:00
Romain Vimont
8c3e2bae7b Simplify Application instantiation
The constructor is public.
2023-11-03 19:05:28 +01:00
25 changed files with 273 additions and 129 deletions

View File

@@ -97,7 +97,7 @@ _scrcpy() {
return
;;
--audio-codec)
COMPREPLY=($(compgen -W 'opus aac raw' -- "$cur"))
COMPREPLY=($(compgen -W 'opus aac flac raw' -- "$cur"))
return
;;
--video-source)
@@ -125,7 +125,7 @@ _scrcpy() {
return
;;
--record-format)
COMPREPLY=($(compgen -W 'mkv mp4' -- "$cur"))
COMPREPLY=($(compgen -W 'mp4 mkv m4a mka opus aac flac wav' -- "$cur"))
return
;;
--render-driver)

View File

@@ -11,7 +11,7 @@ arguments=(
'--always-on-top[Make scrcpy window always on top \(above other windows\)]'
'--audio-bit-rate=[Encode the audio at the given bit-rate]'
'--audio-buffer=[Configure the audio buffering delay (in milliseconds)]'
'--audio-codec=[Select the audio codec]:codec:(opus aac raw)'
'--audio-codec=[Select the audio codec]:codec:(opus aac flac raw)'
'--audio-codec-options=[Set a list of comma-separated key\:type=value options for the device audio encoder]'
'--audio-encoder=[Use a specific MediaCodec audio encoder]'
'--audio-source=[Select the audio source]:source:(output mic)'
@@ -65,7 +65,7 @@ arguments=(
'--push-target=[Set the target directory for pushing files to the device by drag and drop]'
{-r,--record=}'[Record screen to file]:record file:_files'
'--raw-key-events[Inject key events for all input keys, and ignore text events]'
'--record-format=[Force recording format]:format:(mp4 mkv)'
'--record-format=[Force recording format]:format:(mp4 mkv m4a mka opus aac flac wav)'
'--render-driver=[Request SDL to use the given render driver]:driver name:(direct3d opengl opengles2 opengles metal software)'
'--require-audio=[Make scrcpy fail if audio is enabled but does not work]'
'--rotation=[Set the initial display rotation]:rotation values:(0 1 2 3)'

View File

@@ -6,11 +6,11 @@ cd "$DIR"
mkdir -p "$PREBUILT_DATA_DIR"
cd "$PREBUILT_DATA_DIR"
VERSION=6.0-scrcpy-4
VERSION=6.1-scrcpy-2
DEP_DIR="ffmpeg-$VERSION"
FILENAME="$DEP_DIR".7z
SHA256SUM=39274b321491ce83e76cab5d24e7cbe3f402d3ccf382f739b13be5651c146b60
SHA256SUM=7f25f638dc24a0f5d4af07a088b6a604cf33548900bbfd2f6ce0bae050b7664d
if [[ -d "$DEP_DIR" ]]
then

View File

@@ -35,7 +35,7 @@ Default is 50.
.TP
.BI "\-\-audio\-codec " name
Select an audio codec (opus, aac or raw).
Select an audio codec (opus, aac, flac or raw).
Default is opus.
@@ -347,7 +347,7 @@ Record screen to
The format is determined by the
.B \-\-record\-format
option if set, or by the file extension (.mp4 or .mkv).
option if set, or by the file extension.
.TP
.B \-\-raw\-key\-events
@@ -355,7 +355,7 @@ Inject key events for all input keys, and ignore text events.
.TP
.BI "\-\-record\-format " format
Force recording format (either mp4 or mkv).
Force recording format (mp4, mkv, m4a, mka, opus, aac, flac or wav).
.TP
.BI "\-\-render\-driver " name

View File

@@ -152,7 +152,7 @@ static const struct sc_option options[] = {
.longopt_id = OPT_AUDIO_CODEC,
.longopt = "audio-codec",
.argdesc = "name",
.text = "Select an audio codec (opus, aac or raw).\n"
.text = "Select an audio codec (opus, aac, flac or raw).\n"
"Default is opus.",
},
{
@@ -583,7 +583,7 @@ static const struct sc_option options[] = {
.argdesc = "file.mp4",
.text = "Record screen to file.\n"
"The format is determined by the --record-format option if "
"set, or by the file extension (.mp4 or .mkv).",
"set, or by the file extension.",
},
{
.longopt_id = OPT_RAW_KEY_EVENTS,
@@ -594,7 +594,8 @@ static const struct sc_option options[] = {
.longopt_id = OPT_RECORD_FORMAT,
.longopt = "record-format",
.argdesc = "format",
.text = "Force recording format (either mp4 or mkv).",
.text = "Force recording format (mp4, mkv, m4a, mka, opus, aac, flac "
"or wav).",
},
{
.longopt_id = OPT_RENDER_DRIVER,
@@ -1626,6 +1627,12 @@ get_record_format(const char *name) {
if (!strcmp(name, "aac")) {
return SC_RECORD_FORMAT_AAC;
}
if (!strcmp(name, "flac")) {
return SC_RECORD_FORMAT_FLAC;
}
if (!strcmp(name, "wav")) {
return SC_RECORD_FORMAT_WAV;
}
return 0;
}
@@ -1695,11 +1702,15 @@ parse_audio_codec(const char *optarg, enum sc_codec *codec) {
*codec = SC_CODEC_AAC;
return true;
}
if (!strcmp(optarg, "flac")) {
*codec = SC_CODEC_FLAC;
return true;
}
if (!strcmp(optarg, "raw")) {
*codec = SC_CODEC_RAW;
return true;
}
LOGE("Unsupported audio codec: %s (expected opus, aac or raw)", optarg);
LOGE("Unsupported audio codec: %s (expected opus, aac, flac or raw)", optarg);
return false;
}
@@ -2257,6 +2268,19 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
opts->require_audio = true;
}
if (opts->audio_playback && opts->audio_buffer == -1) {
if (opts->audio_codec == SC_CODEC_FLAC) {
// Use 50 ms audio buffer by default, but use a higher value for FLAC,
// which is not low latency (the default encoder produces blocks of
// 4096 samples, which represent ~85.333ms).
LOGI("FLAC audio: audio buffer increased to 120 ms (use "
"--audio-buffer to set a custom value)");
opts->audio_buffer = SC_TICK_FROM_MS(120);
} else {
opts->audio_buffer = SC_TICK_FROM_MS(50);
}
}
#ifdef HAVE_V4L2
if (v4l2) {
if (opts->lock_video_orientation ==
@@ -2352,11 +2376,6 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
}
}
if (opts->audio_codec == SC_CODEC_RAW) {
LOGW("Recording does not support RAW audio codec");
return false;
}
if (opts->video
&& sc_record_format_is_audio_only(opts->record_format)) {
LOGE("Audio container does not support video stream");
@@ -2376,6 +2395,30 @@ parse_args_with_getopt(struct scrcpy_cli_args *args, int argc, char *argv[],
"(try with --audio-codec=aac)");
return false;
}
if (opts->record_format == SC_RECORD_FORMAT_FLAC
&& opts->audio_codec != SC_CODEC_FLAC) {
LOGE("Recording to FLAC file requires a FLAC audio stream "
"(try with --audio-codec=flac)");
return false;
}
if (opts->record_format == SC_RECORD_FORMAT_WAV
&& opts->audio_codec != SC_CODEC_RAW) {
LOGE("Recording to WAV file requires a RAW audio stream "
"(try with --audio-codec=raw)");
return false;
}
if ((opts->record_format == SC_RECORD_FORMAT_MP4 ||
opts->record_format == SC_RECORD_FORMAT_M4A)
&& opts->audio_codec == SC_CODEC_RAW) {
LOGE("Recording to MP4 container does not support RAW audio");
return false;
}
}
if (opts->audio_codec == SC_CODEC_FLAC && opts->audio_bit_rate) {
LOGW("--audio-bit-rate is ignored for FLAC audio codec");
}
if (opts->audio_codec == SC_CODEC_RAW) {

View File

@@ -25,7 +25,8 @@ sc_demuxer_to_avcodec_id(uint32_t codec_id) {
#define SC_CODEC_ID_H265 UINT32_C(0x68323635) // "h265" in ASCII
#define SC_CODEC_ID_AV1 UINT32_C(0x00617631) // "av1" in ASCII
#define SC_CODEC_ID_OPUS UINT32_C(0x6f707573) // "opus" in ASCII
#define SC_CODEC_ID_AAC UINT32_C(0x00616163) // "aac in ASCII"
#define SC_CODEC_ID_AAC UINT32_C(0x00616163) // "aac" in ASCII
#define SC_CODEC_ID_FLAC UINT32_C(0x666c6163) // "flac" in ASCII
#define SC_CODEC_ID_RAW UINT32_C(0x00726177) // "raw" in ASCII
switch (codec_id) {
case SC_CODEC_ID_H264:
@@ -43,6 +44,8 @@ sc_demuxer_to_avcodec_id(uint32_t codec_id) {
return AV_CODEC_ID_OPUS;
case SC_CODEC_ID_AAC:
return AV_CODEC_ID_AAC;
case SC_CODEC_ID_FLAC:
return AV_CODEC_ID_FLAC;
case SC_CODEC_ID_RAW:
return AV_CODEC_ID_PCM_S16LE;
default:
@@ -207,6 +210,11 @@ run_demuxer(void *data) {
codec_ctx->channels = 2;
#endif
codec_ctx->sample_rate = 48000;
if (raw_codec_id == SC_CODEC_ID_FLAC) {
// The sample_fmt is not set by the FLAC decoder
codec_ctx->sample_fmt = AV_SAMPLE_FMT_S16;
}
}
if (avcodec_open2(codec_ctx, codec, NULL) < 0) {

View File

@@ -46,7 +46,7 @@ const struct scrcpy_options scrcpy_options_default = {
.window_height = 0,
.display_id = 0,
.display_buffer = 0,
.audio_buffer = SC_TICK_FROM_MS(50),
.audio_buffer = -1, // depends on the audio format,
.audio_output_buffer = SC_TICK_FROM_MS(5),
.time_limit = 0,
#ifdef HAVE_V4L2

View File

@@ -25,6 +25,8 @@ enum sc_record_format {
SC_RECORD_FORMAT_MKA,
SC_RECORD_FORMAT_OPUS,
SC_RECORD_FORMAT_AAC,
SC_RECORD_FORMAT_FLAC,
SC_RECORD_FORMAT_WAV,
};
static inline bool
@@ -32,7 +34,9 @@ sc_record_format_is_audio_only(enum sc_record_format fmt) {
return fmt == SC_RECORD_FORMAT_M4A
|| fmt == SC_RECORD_FORMAT_MKA
|| fmt == SC_RECORD_FORMAT_OPUS
|| fmt == SC_RECORD_FORMAT_AAC;
|| fmt == SC_RECORD_FORMAT_AAC
|| fmt == SC_RECORD_FORMAT_FLAC
|| fmt == SC_RECORD_FORMAT_WAV;
}
enum sc_codec {
@@ -41,6 +45,7 @@ enum sc_codec {
SC_CODEC_AV1,
SC_CODEC_OPUS,
SC_CODEC_AAC,
SC_CODEC_FLAC,
SC_CODEC_RAW,
};

View File

@@ -69,6 +69,10 @@ sc_recorder_get_format_name(enum sc_record_format format) {
return "matroska";
case SC_RECORD_FORMAT_OPUS:
return "opus";
case SC_RECORD_FORMAT_FLAC:
return "flac";
case SC_RECORD_FORMAT_WAV:
return "wav";
default:
return NULL;
}
@@ -101,7 +105,7 @@ sc_recorder_write_stream(struct sc_recorder *recorder,
AVStream *stream = recorder->ctx->streams[st->index];
sc_recorder_rescale_packet(stream, packet);
if (st->last_pts != AV_NOPTS_VALUE && packet->pts <= st->last_pts) {
LOGW("Fixing PTS non monotonically increasing in stream %d "
LOGD("Fixing PTS non monotonically increasing in stream %d "
"(%" PRIi64 " >= %" PRIi64 ")",
st->index, st->last_pts, packet->pts);
packet->pts = ++st->last_pts;
@@ -166,13 +170,14 @@ sc_recorder_close_output_file(struct sc_recorder *recorder) {
}
static inline bool
sc_recorder_has_empty_queues(struct sc_recorder *recorder) {
sc_recorder_must_wait_for_config_packets(struct sc_recorder *recorder) {
if (recorder->video && sc_vecdeque_is_empty(&recorder->video_queue)) {
// The video queue is empty
return true;
}
if (recorder->audio && sc_vecdeque_is_empty(&recorder->audio_queue)) {
if (recorder->audio && recorder->audio_expects_config_packet
&& sc_vecdeque_is_empty(&recorder->audio_queue)) {
// The audio queue is empty (when audio is enabled)
return true;
}
@@ -188,7 +193,7 @@ sc_recorder_process_header(struct sc_recorder *recorder) {
while (!recorder->stopped &&
((recorder->video && !recorder->video_init)
|| (recorder->audio && !recorder->audio_init)
|| sc_recorder_has_empty_queues(recorder))) {
|| sc_recorder_must_wait_for_config_packets(recorder))) {
sc_cond_wait(&recorder->cond, &recorder->mutex);
}
@@ -207,7 +212,8 @@ sc_recorder_process_header(struct sc_recorder *recorder) {
}
AVPacket *audio_pkt = NULL;
if (!sc_vecdeque_is_empty(&recorder->audio_queue)) {
if (recorder->audio_expects_config_packet &&
!sc_vecdeque_is_empty(&recorder->audio_queue)) {
assert(recorder->audio);
audio_pkt = sc_vecdeque_pop(&recorder->audio_queue);
}
@@ -595,6 +601,10 @@ sc_recorder_audio_packet_sink_open(struct sc_packet_sink *sink,
recorder->audio_stream.index = stream->index;
// A config packet is provided for all formats supported except raw audio
recorder->audio_expects_config_packet =
ctx->codec_id != AV_CODEC_ID_PCM_S16LE;
recorder->audio_init = true;
sc_cond_signal(&recorder->cond);
sc_mutex_unlock(&recorder->mutex);
@@ -707,6 +717,8 @@ sc_recorder_init(struct sc_recorder *recorder, const char *filename,
recorder->video_init = false;
recorder->audio_init = false;
recorder->audio_expects_config_packet = false;
sc_recorder_stream_init(&recorder->video_stream);
sc_recorder_stream_init(&recorder->audio_stream);

View File

@@ -50,6 +50,8 @@ struct sc_recorder {
bool video_init;
bool audio_init;
bool audio_expects_config_packet;
struct sc_recorder_stream video_stream;
struct sc_recorder_stream audio_stream;

View File

@@ -417,10 +417,14 @@ scrcpy(struct scrcpy_options *options) {
if (options->video_playback) {
sdl_set_hints(options->render_driver);
if (SDL_Init(SDL_INIT_VIDEO)) {
LOGE("Could not initialize SDL video: %s", SDL_GetError());
goto end;
}
}
// Initialize the video subsystem even if --no-video or --no-video-playback
// is passed so that clipboard synchronization still works.
// <https://github.com/Genymobile/scrcpy/issues/4418>
if (SDL_Init(SDL_INIT_VIDEO)) {
LOGE("Could not initialize SDL video: %s", SDL_GetError());
goto end;
}
if (options->audio_playback) {

View File

@@ -178,6 +178,8 @@ sc_server_get_codec_name(enum sc_codec codec) {
return "opus";
case SC_CODEC_AAC:
return "aac";
case SC_CODEC_FLAC:
return "flac";
case SC_CODEC_RAW:
return "raw";
default:

View File

@@ -16,6 +16,6 @@ cpu = 'i686'
endian = 'little'
[properties]
prebuilt_ffmpeg = 'ffmpeg-6.0-scrcpy-4/win32'
prebuilt_ffmpeg = 'ffmpeg-6.1-scrcpy-2/win32'
prebuilt_sdl2 = 'SDL2-2.28.4/i686-w64-mingw32'
prebuilt_libusb = 'libusb-1.0.26/libusb-MinGW-Win32'

View File

@@ -16,6 +16,6 @@ cpu = 'x86_64'
endian = 'little'
[properties]
prebuilt_ffmpeg = 'ffmpeg-6.0-scrcpy-4/win64'
prebuilt_ffmpeg = 'ffmpeg-6.1-scrcpy-2/win64'
prebuilt_sdl2 = 'SDL2-2.28.4/x86_64-w64-mingw32'
prebuilt_libusb = 'libusb-1.0.26/libusb-MinGW-x64'

View File

@@ -62,12 +62,13 @@ scrcpy --audio-source=mic --no-video --no-playback --record=file.opus
## Codec
The audio codec can be selected. The possible values are `opus` (default), `aac`
and `raw` (uncompressed PCM 16-bit LE):
The audio codec can be selected. The possible values are `opus` (default),
`aac`, `flac` and `raw` (uncompressed PCM 16-bit LE):
```bash
scrcpy --audio-codec=opus # default
scrcpy --audio-codec=aac
scrcpy --audio-codec=flac
scrcpy --audio-codec=raw
```
@@ -80,7 +81,14 @@ then your device has no Opus encoder: try `scrcpy --audio-codec=aac`.
For advanced usage, to pass arbitrary parameters to the [`MediaFormat`],
check `--audio-codec-options` in the manpage or in `scrcpy --help`.
For example, to change the [FLAC compression level]:
```bash
scrcpy --audio-codec=flac --audio-codec-options=flac-compression-level=8
```
[`MediaFormat`]: https://developer.android.com/reference/android/media/MediaFormat
[FLAC compression level]: https://developer.android.com/reference/android/media/MediaFormat#KEY_FLAC_COMPRESSION_LEVEL
## Encoder

View File

@@ -18,7 +18,9 @@ To record only the audio:
```bash
scrcpy --no-video --record=file.opus
scrcpy --no-video --audio-codec=aac --record=file.aac
# .m4a/.mp4 and .mka/.mkv are also supported for both opus and aac
scrcpy --no-video --audio-codec=flac --record=file.flac
scrcpy --no-video --audio-codec=raw --record=file.wav
# .m4a/.mp4 and .mka/.mkv are also supported for opus, aac and flac
```
Timestamps are captured on the device, so [packet delay variation] does not
@@ -31,14 +33,17 @@ course, not if you capture your scrcpy window and audio output on the computer).
## Format
The video and audio streams are encoded on the device, but are muxed on the
client side. Two formats (containers) are supported:
- Matroska (`.mkv`)
- MP4 (`.mp4`)
client side. Several formats (containers) are supported:
- MP4 (`.mp4`, `.m4a`, `.aac`)
- Matroska (`.mkv`, `.mka`)
- OPUS (`.opus`)
- FLAC (`.flac`)
- WAV (`.wav`)
The container is automatically selected based on the filename.
It is also possible to explicitly select a container (in that case the filename
needs not end with `.mkv` or `.mp4`):
needs not end with a known extension):
```
scrcpy --record=file --record-format=mkv

View File

@@ -94,10 +94,10 @@ dist-win32: build-server build-win32
cp app/data/scrcpy-noconsole.vbs "$(DIST)/$(WIN32_TARGET_DIR)"
cp app/data/icon.png "$(DIST)/$(WIN32_TARGET_DIR)"
cp app/data/open_a_terminal_here.bat "$(DIST)/$(WIN32_TARGET_DIR)"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win32/bin/avutil-58.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win32/bin/avcodec-60.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win32/bin/avformat-60.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win32/bin/swresample-4.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.1-scrcpy-2/win32/bin/avutil-58.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.1-scrcpy-2/win32/bin/avcodec-60.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.1-scrcpy-2/win32/bin/avformat-60.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.1-scrcpy-2/win32/bin/swresample-4.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/platform-tools-34.0.5/adb.exe "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/platform-tools-34.0.5/AdbWinApi.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
cp app/prebuilt-deps/data/platform-tools-34.0.5/AdbWinUsbApi.dll "$(DIST)/$(WIN32_TARGET_DIR)/"
@@ -112,10 +112,10 @@ dist-win64: build-server build-win64
cp app/data/scrcpy-noconsole.vbs "$(DIST)/$(WIN64_TARGET_DIR)"
cp app/data/icon.png "$(DIST)/$(WIN64_TARGET_DIR)"
cp app/data/open_a_terminal_here.bat "$(DIST)/$(WIN64_TARGET_DIR)"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win64/bin/avutil-58.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win64/bin/avcodec-60.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win64/bin/avformat-60.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.0-scrcpy-4/win64/bin/swresample-4.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.1-scrcpy-2/win64/bin/avutil-58.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.1-scrcpy-2/win64/bin/avcodec-60.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.1-scrcpy-2/win64/bin/avformat-60.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/ffmpeg-6.1-scrcpy-2/win64/bin/swresample-4.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/platform-tools-34.0.5/adb.exe "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/platform-tools-34.0.5/AdbWinApi.dll "$(DIST)/$(WIN64_TARGET_DIR)/"
cp app/prebuilt-deps/data/platform-tools-34.0.5/AdbWinUsbApi.dll "$(DIST)/$(WIN64_TARGET_DIR)/"

View File

@@ -24,11 +24,19 @@ public final class AudioCapture {
public static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;
public static final int BYTES_PER_SAMPLE = 2;
// Never read more than 1024 samples, even if the buffer is bigger (that would increase latency).
// A lower value is useless, since the system captures audio samples by blocks of 1024 (so for example if we read by blocks of 256 samples, we
// receive 4 successive blocks without waiting, then we wait for the 4 next ones).
public static final int MAX_READ_SIZE = 1024 * CHANNELS * BYTES_PER_SAMPLE;
private static final long ONE_SAMPLE_US = (1000000 + SAMPLE_RATE - 1) / SAMPLE_RATE; // 1 sample in microseconds (used for fixing PTS)
private final int audioSource;
private AudioRecord recorder;
private final AudioTimestamp timestamp = new AudioTimestamp();
private long previousRecorderTimestamp = -1;
private long previousPts = 0;
private long nextPts = 0;
@@ -36,10 +44,6 @@ public final class AudioCapture {
this.audioSource = audioSource.value();
}
public static int millisToBytes(int millis) {
return SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE * millis / 1000;
}
private static AudioFormat createAudioFormat() {
AudioFormat.Builder builder = new AudioFormat.Builder();
builder.setEncoding(ENCODING);
@@ -135,8 +139,8 @@ public final class AudioCapture {
}
@TargetApi(Build.VERSION_CODES.N)
public int read(ByteBuffer directBuffer, int size, MediaCodec.BufferInfo outBufferInfo) {
int r = recorder.read(directBuffer, size);
public int read(ByteBuffer directBuffer, MediaCodec.BufferInfo outBufferInfo) {
int r = recorder.read(directBuffer, MAX_READ_SIZE);
if (r <= 0) {
return r;
}
@@ -144,8 +148,9 @@ public final class AudioCapture {
long pts;
int ret = recorder.getTimestamp(timestamp, AudioTimestamp.TIMEBASE_MONOTONIC);
if (ret == AudioRecord.SUCCESS) {
if (ret == AudioRecord.SUCCESS && timestamp.nanoTime != previousRecorderTimestamp) {
pts = timestamp.nanoTime / 1000;
previousRecorderTimestamp = timestamp.nanoTime;
} else {
if (nextPts == 0) {
Ln.w("Could not get any audio timestamp");
@@ -157,13 +162,13 @@ public final class AudioCapture {
long durationUs = r * 1000000 / (CHANNELS * BYTES_PER_SAMPLE * SAMPLE_RATE);
nextPts = pts + durationUs;
if (previousPts != 0 && pts < previousPts) {
if (previousPts != 0 && pts < previousPts + ONE_SAMPLE_US) {
// Audio PTS may come from two sources:
// - recorder.getTimestamp() if the call works;
// - an estimation from the previous PTS and the packet size as a fallback.
//
// Therefore, the property that PTS are monotonically increasing is not guaranteed in corner cases, so enforce it.
pts = previousPts + 1;
pts = previousPts + ONE_SAMPLE_US;
}
previousPts = pts;

View File

@@ -5,6 +5,7 @@ import android.media.MediaFormat;
public enum AudioCodec implements Codec {
OPUS(0x6f_70_75_73, "opus", MediaFormat.MIMETYPE_AUDIO_OPUS),
AAC(0x00_61_61_63, "aac", MediaFormat.MIMETYPE_AUDIO_AAC),
FLAC(0x66_6c_61_63, "flac", MediaFormat.MIMETYPE_AUDIO_FLAC),
RAW(0x00_72_61_77, "raw", MediaFormat.MIMETYPE_AUDIO_RAW);
private final int id; // 4-byte ASCII representation of the name

View File

@@ -37,9 +37,6 @@ public final class AudioEncoder implements AsyncProcessor {
private static final int SAMPLE_RATE = AudioCapture.SAMPLE_RATE;
private static final int CHANNELS = AudioCapture.CHANNELS;
private static final int READ_MS = 5; // milliseconds
private static final int READ_SIZE = AudioCapture.millisToBytes(READ_MS);
private final AudioCapture capture;
private final Streamer streamer;
private final int bitRate;
@@ -93,7 +90,7 @@ public final class AudioEncoder implements AsyncProcessor {
while (!Thread.currentThread().isInterrupted()) {
InputTask task = inputTasks.take();
ByteBuffer buffer = mediaCodec.getInputBuffer(task.index);
int r = capture.read(buffer, READ_SIZE, bufferInfo);
int r = capture.read(buffer, bufferInfo);
if (r <= 0) {
throw new IOException("Could not read audio: " + r);
}

View File

@@ -13,9 +13,6 @@ public final class AudioRawRecorder implements AsyncProcessor {
private Thread thread;
private static final int READ_MS = 5; // milliseconds
private static final int READ_SIZE = AudioCapture.millisToBytes(READ_MS);
public AudioRawRecorder(AudioCapture capture, Streamer streamer) {
this.capture = capture;
this.streamer = streamer;
@@ -28,16 +25,22 @@ public final class AudioRawRecorder implements AsyncProcessor {
return;
}
final ByteBuffer buffer = ByteBuffer.allocateDirect(READ_SIZE);
final ByteBuffer buffer = ByteBuffer.allocateDirect(AudioCapture.MAX_READ_SIZE);
final MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
try {
capture.start();
try {
capture.start();
} catch (Throwable t) {
// Notify the client that the audio could not be captured
streamer.writeDisableStream(false);
throw t;
}
streamer.writeAudioHeader();
while (!Thread.currentThread().isInterrupted()) {
buffer.position(0);
int r = capture.read(buffer, READ_SIZE, bufferInfo);
int r = capture.read(buffer, bufferInfo);
if (r < 0) {
throw new IOException("Could not read audio: " + r);
}
@@ -45,10 +48,11 @@ public final class AudioRawRecorder implements AsyncProcessor {
streamer.writePacket(buffer, bufferInfo);
}
} catch (Throwable e) {
// Notify the client that the audio could not be captured
streamer.writeDisableStream(false);
throw e;
} catch (IOException e) {
// Broken pipe is expected on close, because the socket is closed by the client
if (!IO.isBrokenPipe(e)) {
Ln.e("Audio capture error", e);
}
} finally {
capture.stop();
}
@@ -62,8 +66,8 @@ public final class AudioRawRecorder implements AsyncProcessor {
record();
} catch (AudioCaptureForegroundException e) {
// Do not print stack trace, a user-friendly error-message has already been logged
} catch (IOException e) {
Ln.e("Audio recording error", e);
} catch (Throwable t) {
Ln.e("Audio recording error", t);
fatalError = true;
} finally {
Ln.d("Audio recorder stopped");

View File

@@ -2,11 +2,12 @@ package com.genymobile.scrcpy;
import android.annotation.TargetApi;
import android.content.AttributionSource;
import android.content.MutableContextWrapper;
import android.content.Context;
import android.content.ContextWrapper;
import android.os.Build;
import android.os.Process;
public final class FakeContext extends MutableContextWrapper {
public final class FakeContext extends ContextWrapper {
public static final String PACKAGE_NAME = "com.android.shell";
public static final int ROOT_UID = 0; // Like android.os.Process.ROOT_UID, but before API 29
@@ -18,7 +19,7 @@ public final class FakeContext extends MutableContextWrapper {
}
private FakeContext() {
super(null);
super(Workarounds.getSystemContext());
}
@Override
@@ -44,4 +45,9 @@ public final class FakeContext extends MutableContextWrapper {
public int getDeviceId() {
return 0;
}
@Override
public Context getApplicationContext() {
return this;
}
}

View File

@@ -93,19 +93,26 @@ public final class LogUtils {
builder.append("\n (none)");
} else {
for (String id : cameraIds) {
builder.append("\n --video-source=camera --camera-id=").append(id);
builder.append("\n --camera-id=").append(id);
CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(id);
int facing = characteristics.get(CameraCharacteristics.LENS_FACING);
builder.append(" (").append(getCameraFacingName(facing)).append(", ");
Rect activeSize = characteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
builder.append(activeSize.width()).append("x").append(activeSize.height()).append(", ");
builder.append(activeSize.width()).append("x").append(activeSize.height());
// Capture frame rates for low-FPS mode are the same for every resolution
Range<Integer>[] lowFpsRanges = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
SortedSet<Integer> uniqueLowFps = getUniqueSet(lowFpsRanges);
builder.append("fps=").append(uniqueLowFps).append(')');
try {
// Capture frame rates for low-FPS mode are the same for every resolution
Range<Integer>[] lowFpsRanges = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
SortedSet<Integer> uniqueLowFps = getUniqueSet(lowFpsRanges);
builder.append(", fps=").append(uniqueLowFps);
} catch (Exception e) {
// Some devices may provide invalid ranges, causing an IllegalArgumentException "lower must be less than or equal to upper"
Ln.w("Could not get available frame rates for camera " + id, e);
}
builder.append(')');
if (includeSizes) {
StreamConfigurationMap configs = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);

View File

@@ -5,14 +5,14 @@ import android.media.MediaCodec;
import java.io.FileDescriptor;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;
public final class Streamer {
private static final long PACKET_FLAG_CONFIG = 1L << 63;
private static final long PACKET_FLAG_KEY_FRAME = 1L << 62;
private static final long AOPUSHDR = 0x5244485355504F41L; // "AOPUSHDR" in ASCII (little-endian)
private final FileDescriptor fd;
private final Codec codec;
private final boolean sendCodecMeta;
@@ -30,6 +30,7 @@ public final class Streamer {
public Codec getCodec() {
return codec;
}
public void writeAudioHeader() throws IOException {
if (sendCodecMeta) {
ByteBuffer buffer = ByteBuffer.allocate(4);
@@ -62,8 +63,12 @@ public final class Streamer {
}
public void writePacket(ByteBuffer buffer, long pts, boolean config, boolean keyFrame) throws IOException {
if (config && codec == AudioCodec.OPUS) {
fixOpusConfigPacket(buffer);
if (config) {
if (codec == AudioCodec.OPUS) {
fixOpusConfigPacket(buffer);
} else if (codec == AudioCodec.FLAC) {
fixFlacConfigPacket(buffer);
}
}
if (sendFrameMeta) {
@@ -120,11 +125,14 @@ public final class Streamer {
throw new IOException("Not enough data in OPUS config packet");
}
long id = buffer.getLong();
if (id != AOPUSHDR) {
final byte[] opusHeaderId = {'A', 'O', 'P', 'U', 'S', 'H', 'D', 'R'};
byte[] idBuffer = new byte[8];
buffer.get(idBuffer);
if (!Arrays.equals(idBuffer, opusHeaderId)) {
throw new IOException("OPUS header not found");
}
// The size is in native byte-order
long sizeLong = buffer.getLong();
if (sizeLong < 0 || sizeLong >= 0x7FFFFFFF) {
throw new IOException("Invalid block size in OPUS header: " + sizeLong);
@@ -138,4 +146,41 @@ public final class Streamer {
// Set the buffer to point to the OPUS header slice
buffer.limit(buffer.position() + size);
}
private static void fixFlacConfigPacket(ByteBuffer buffer) throws IOException {
// 00000000 66 4c 61 43 00 00 00 22 |fLaC..." |
// -------------- BELOW IS THE PART WE MUST PUT AS EXTRADATA -------------------
// 00000000 10 00 10 00 00 00 00 00 | ........|
// 00000010 00 00 0b b8 02 f0 00 00 00 00 00 00 00 00 00 00 |................|
// 00000020 00 00 00 00 00 00 00 00 00 00 |.......... |
// ------------------------------------------------------------------------------
// 00000020 84 00 00 28 20 00 | ...( .|
// 00000030 00 00 72 65 66 65 72 65 6e 63 65 20 6c 69 62 46 |..reference libF|
// 00000040 4c 41 43 20 31 2e 33 2e 32 20 32 30 32 32 31 30 |LAC 1.3.2 202210|
// 00000050 32 32 00 00 00 00 |22....|
//
// <https://developer.android.com/reference/android/media/MediaCodec#CSD>
if (buffer.remaining() < 8) {
throw new IOException("Not enough data in FLAC config packet");
}
final byte[] flacHeaderId = {'f', 'L', 'a', 'C'};
byte[] idBuffer = new byte[4];
buffer.get(idBuffer);
if (!Arrays.equals(idBuffer, flacHeaderId)) {
throw new IOException("FLAC header not found");
}
// The size is in big-endian
buffer.order(ByteOrder.BIG_ENDIAN);
int size = buffer.getInt();
if (buffer.remaining() < size) {
throw new IOException("Not enough data in FLAC header (invalid size: " + size + ")");
}
// Set the buffer to point to the FLAC header slice
buffer.limit(buffer.position() + size);
}
}

View File

@@ -21,18 +21,34 @@ import java.lang.reflect.Method;
public final class Workarounds {
private static Class<?> activityThreadClass;
private static Object activityThread;
private static final Class<?> ACTIVITY_THREAD_CLASS;
private static final Object ACTIVITY_THREAD;
static {
prepareMainLooper();
try {
// ActivityThread activityThread = new ActivityThread();
ACTIVITY_THREAD_CLASS = Class.forName("android.app.ActivityThread");
Constructor<?> activityThreadConstructor = ACTIVITY_THREAD_CLASS.getDeclaredConstructor();
activityThreadConstructor.setAccessible(true);
ACTIVITY_THREAD = activityThreadConstructor.newInstance();
// ActivityThread.sCurrentActivityThread = activityThread;
Field sCurrentActivityThreadField = ACTIVITY_THREAD_CLASS.getDeclaredField("sCurrentActivityThread");
sCurrentActivityThreadField.setAccessible(true);
sCurrentActivityThreadField.set(null, ACTIVITY_THREAD);
} catch (Exception e) {
throw new AssertionError(e);
}
}
private Workarounds() {
// not instantiable
}
public static void apply(boolean audio, boolean camera) {
Workarounds.prepareMainLooper();
boolean mustFillAppInfo = false;
boolean mustFillBaseContext = false;
boolean mustFillAppContext = false;
if (Build.BRAND.equalsIgnoreCase("meizu")) {
@@ -53,7 +69,6 @@ public final class Workarounds {
// - <https://github.com/Genymobile/scrcpy/issues/4015#issuecomment-1595382142>
// - <https://github.com/Genymobile/scrcpy/issues/3805#issuecomment-1596148031>
mustFillAppInfo = true;
mustFillBaseContext = true;
mustFillAppContext = true;
}
@@ -66,15 +81,12 @@ public final class Workarounds {
if (camera) {
mustFillAppInfo = true;
mustFillBaseContext = true;
mustFillAppContext = true;
}
if (mustFillAppInfo) {
Workarounds.fillAppInfo();
}
if (mustFillBaseContext) {
Workarounds.fillBaseContext();
}
if (mustFillAppContext) {
Workarounds.fillAppContext();
}
@@ -93,27 +105,9 @@ public final class Workarounds {
Looper.prepareMainLooper();
}
@SuppressLint("PrivateApi,DiscouragedPrivateApi")
private static void fillActivityThread() throws Exception {
if (activityThread == null) {
// ActivityThread activityThread = new ActivityThread();
activityThreadClass = Class.forName("android.app.ActivityThread");
Constructor<?> activityThreadConstructor = activityThreadClass.getDeclaredConstructor();
activityThreadConstructor.setAccessible(true);
activityThread = activityThreadConstructor.newInstance();
// ActivityThread.sCurrentActivityThread = activityThread;
Field sCurrentActivityThreadField = activityThreadClass.getDeclaredField("sCurrentActivityThread");
sCurrentActivityThreadField.setAccessible(true);
sCurrentActivityThreadField.set(null, activityThread);
}
}
@SuppressLint("PrivateApi,DiscouragedPrivateApi")
private static void fillAppInfo() {
try {
fillActivityThread();
// ActivityThread.AppBindData appBindData = new ActivityThread.AppBindData();
Class<?> appBindDataClass = Class.forName("android.app.ActivityThread$AppBindData");
Constructor<?> appBindDataConstructor = appBindDataClass.getDeclaredConstructor();
@@ -129,9 +123,9 @@ public final class Workarounds {
appInfoField.set(appBindData, applicationInfo);
// activityThread.mBoundApplication = appBindData;
Field mBoundApplicationField = activityThreadClass.getDeclaredField("mBoundApplication");
Field mBoundApplicationField = ACTIVITY_THREAD_CLASS.getDeclaredField("mBoundApplication");
mBoundApplicationField.setAccessible(true);
mBoundApplicationField.set(activityThread, appBindData);
mBoundApplicationField.set(ACTIVITY_THREAD, appBindData);
} catch (Throwable throwable) {
// this is a workaround, so failing is not an error
Ln.d("Could not fill app info: " + throwable.getMessage());
@@ -141,33 +135,29 @@ public final class Workarounds {
@SuppressLint("PrivateApi,DiscouragedPrivateApi")
private static void fillAppContext() {
try {
fillActivityThread();
Application app = Application.class.newInstance();
Application app = new Application();
Field baseField = ContextWrapper.class.getDeclaredField("mBase");
baseField.setAccessible(true);
baseField.set(app, FakeContext.get());
// activityThread.mInitialApplication = app;
Field mInitialApplicationField = activityThreadClass.getDeclaredField("mInitialApplication");
Field mInitialApplicationField = ACTIVITY_THREAD_CLASS.getDeclaredField("mInitialApplication");
mInitialApplicationField.setAccessible(true);
mInitialApplicationField.set(activityThread, app);
mInitialApplicationField.set(ACTIVITY_THREAD, app);
} catch (Throwable throwable) {
// this is a workaround, so failing is not an error
Ln.d("Could not fill app context: " + throwable.getMessage());
}
}
private static void fillBaseContext() {
static Context getSystemContext() {
try {
fillActivityThread();
Method getSystemContextMethod = activityThreadClass.getDeclaredMethod("getSystemContext");
Context context = (Context) getSystemContextMethod.invoke(activityThread);
FakeContext.get().setBaseContext(context);
Method getSystemContextMethod = ACTIVITY_THREAD_CLASS.getDeclaredMethod("getSystemContext");
return (Context) getSystemContextMethod.invoke(ACTIVITY_THREAD);
} catch (Throwable throwable) {
// this is a workaround, so failing is not an error
Ln.d("Could not fill base context: " + throwable.getMessage());
Ln.d("Could not get system context: " + throwable.getMessage());
return null;
}
}