Merge branches 'misc', 'xa2-ds' and 'info' into feature

This commit is contained in:
arch1t3cht 2022-08-19 02:31:59 +02:00
commit 3b8cc6deb1
15 changed files with 378 additions and 41 deletions

View file

@@ -19,6 +19,7 @@ Being a collection of different feature additions, this repository consists of a
The `cibuilds` branch makes some CI builds of snapshots of `feature` at relevant points in time.
### Branch/Feature list
This list is for navigating the repository. Go to the [release page](https://github.com/arch1t3cht/Aegisub/releases) for a more structured changelog.
- [`folding`](https://github.com/arch1t3cht/Aegisub/tree/folding): Add the ability to visually group and collapse lines in the subtitle grid
- [`lua_api`](https://github.com/arch1t3cht/Aegisub/tree/lua_api): Add new functions to the Lua automation API, like controlling the selection or cursor in the text edit box
- [`vector_clip_actions`](https://github.com/arch1t3cht/Aegisub/tree/vector_clip_actions): Make the different modes of the vector clip tool (lines, bezier curves, adding points, etc) bindable to hotkeys
@@ -28,13 +29,20 @@ The `cibuilds` branch makes some CI builds of snapshots of `feature` at relevant
- [`vapoursynth`](https://github.com/arch1t3cht/Aegisub/tree/vapoursynth): Add Vapoursynth audio and video source
- [`bugfixes`](https://github.com/arch1t3cht/Aegisub/tree/bugfixes): Various fixes necessary for compilation. Most branches are based on this.
- [`fixes`](https://github.com/arch1t3cht/Aegisub/tree/fixes): Miscellaneous bugfixes
- [`misc`](https://github.com/arch1t3cht/Aegisub/tree/misc): Other miscellaneous additions
- [`misc_dc`](https://github.com/arch1t3cht/Aegisub/tree/misc_dc): Miscellaneous changes taken from AegisubDC
- [`xa2-ds`](https://github.com/arch1t3cht/Aegisub/tree/xa2-ds): Add XAudio2 backend and allow stereo playback for some other backends, by wangqr and Shinon.
- [`stereo`](https://github.com/arch1t3cht/Aegisub/tree/stereo): Add multi-channel support for the other audio backends where possible.
- [`video_panning_feature`](https://github.com/arch1t3cht/Aegisub/tree/video_panning_feature): Merge [moex3's video zoom and panning](https://github.com/TypesettingTools/Aegisub/pull/150), with an OSX fix and more options to control zoom behavior
- [`spectrum-frequency-mapping`](https://github.com/arch1t3cht/Aegisub/tree/spectrum-frequency-mapping): Merge EleonoreMizo's [spectrum display improvements](https://github.com/TypesettingTools/Aegisub/pull/94), and also make Shift+Scroll vertically zoom the audio display
- [`wangqr_time_video`](https://github.com/arch1t3cht/Aegisub/tree/wangqr_time_video): Merge wangqr's feature adding a tool for timing subtitles to changes in the video
### Troubleshooting
I'll gladly take any bug reports, but if you encounter an issue, please first check whether it occurs only on my fork or also on [earlier TSTools builds](https://github.com/TypesettingTools/Aegisub/actions).
If it wasn't introduced by my fork, I can still take a look, but I can't promise anything.
You can find me for support on various servers, including the cave and the TSTools server linked below.
#### Building fails with a "CMake sandbox violation"
This is an upstream bug in Meson. For now, you need to downgrade Meson using `pip install meson==0.62.2`.
@@ -55,6 +63,37 @@ If it's not because of this particular bug, you can also try an alternative vide
If you're compiling yourself, try adding `--force-fallback-for=zlib` to the meson options.
### Compilation
For compilation on Windows, see the TSTools documentation below. Also check the [GitHub workflow](https://github.com/arch1t3cht/Aegisub/blob/cibuilds/.github/workflows/ci.yml) for the project arguments.
On Linux, you can use the [TSTools PKGBUILD](https://aur.archlinux.org/packages/aegisub-ttools-meson-git) as a base, in particular for installing the necessary dependencies if you don't want to compile them yourself.
To compile manually (see the condensed command sketch after this list):
- Install Meson (at the moment, you'll need to downgrade Meson below 0.63.0: `pip install meson==0.62.2`)
- Clone the repository
- In the repository, run `meson setup build` for the default configuration. See below for further options.
- `cd` to the `build` directory and run `ninja`
- You'll get an `aegisub` binary in the `build` folder. To install it to a system-wide location, run `ninja install`. To install to `/usr` instead of `/usr/local`, pass `--prefix=/usr` when configuring or reconfiguring meson.
- When recompiling after pulling new commits, skip the `meson setup` step and just run `ninja` from the `build` directory, even if the build configuration has changed.
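For reference, the steps above condense to roughly the following shell session (a sketch assuming a fresh clone and the default configuration; the clone URL and paths are just the obvious choices, adjust as needed):

```sh
pip install meson==0.62.2    # Meson below 0.63.0 is currently required
git clone https://github.com/arch1t3cht/Aegisub.git
cd Aegisub
meson setup build            # default configuration; see the flags below
cd build
ninja                        # produces the aegisub binary in this directory
ninja install                # optional system-wide install; pass --prefix=/usr at setup time to target /usr instead of /usr/local
```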
#### Compilation flags
Some features are not enabled by default. To enable them, pass `-D<feature>=enabled` with the `meson setup` command:
- `-Davisynth=enabled`: Avisynth support
- `-Dbestsource=enabled`: BestSource
- `-Dvapoursynth=enabled`: Vapoursynth support
You can also disable options that are active by default in the same way. Check the file `meson_options.txt` for all options.
To change the options of an existing build directory, run `meson setup --reconfigure <new arguments>` from inside the `build` directory.
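As a hypothetical example, enabling all three optional features at configure time would look like this (the flag names are the ones listed above):

```sh
meson setup build -Davisynth=enabled -Dbestsource=enabled -Dvapoursynth=enabled
```

For an already configured `build` directory, the same `-D` arguments go to the `meson setup --reconfigure` call described above.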
### Dependencies
Apart from the dependencies for the TSTools version, there are some additional dependencies. These are cloned and compiled from scratch if not found, but you might want to install binaries instead:
- `jansson`: For BestSource
- `ffmpeg`: Becomes a direct dependency when compiling with BestSource
- `avisynth` (or `avisynthplus`): Optional run-time dependency for the Avisynth source
- `vapoursynth`: Optional run-time dependency for the VapourSynth source
# Aegisub
For binaries and general information [see the homepage](http://www.aegisub.org).

View file

@@ -119,6 +119,47 @@ void AudioProvider::FillBufferInt16Mono(int16_t* buf, int64_t start, int64_t cou
free(buff);
}
// This entire file has turned into a mess. For now I'm just following the pattern of the wangqr code, but
// this should really be restructured entirely again. The original type constructor-based system worked very well - it could
// just give downmix/conversion control to the players instead.
void AudioProvider::GetAudioWithVolume(void *buf, int64_t start, int64_t count, double volume) const {
GetAudio(buf, start, count);
if (volume == 1.0) return;
int64_t n = count * GetChannels();
if (float_samples) {
if (bytes_per_sample == sizeof(float)) {
float *buff = reinterpret_cast<float *>(buf);
for (int64_t i = 0; i < n; ++i)
buff[i] = static_cast<float>(buff[i] * volume);
} else if (bytes_per_sample == sizeof(double)) {
double *buff = reinterpret_cast<double *>(buf);
for (int64_t i = 0; i < n; ++i)
buff[i] = buff[i] * volume;
}
}
else {
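// Integer samples: 8-bit audio is unsigned with a 128 bias, the wider widths are signed;
// the 8- and 16-bit paths clamp the scaled result to the valid range.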
if (bytes_per_sample == sizeof(uint8_t)) {
uint8_t *buff = reinterpret_cast<uint8_t *>(buf);
for (int64_t i = 0; i < n; ++i)
buff[i] = util::mid(0, static_cast<int>(((int) buff[i] - 128) * volume + 128), 0xFF);
} else if (bytes_per_sample == sizeof(int16_t)) {
int16_t *buff = reinterpret_cast<int16_t *>(buf);
for (int64_t i = 0; i < n; ++i)
buff[i] = util::mid(-0x8000, static_cast<int>(buff[i] * volume), 0x7FFF);
} else if (bytes_per_sample == sizeof(int32_t)) {
int32_t *buff = reinterpret_cast<int32_t *>(buf);
for (int64_t i = 0; i < n; ++i)
buff[i] = static_cast<int32_t>(buff[i] * volume);
} else if (bytes_per_sample == sizeof(int64_t)) {
int64_t *buff = reinterpret_cast<int64_t *>(buf);
for (int64_t i = 0; i < n; ++i)
buff[i] = static_cast<int64_t>(buff[i] * volume);
}
}
}
void AudioProvider::GetInt16MonoAudioWithVolume(int16_t *buf, int64_t start, int64_t count, double volume) const {
GetInt16MonoAudio(buf, start, count);
if (volume == 1.0) return;
@@ -261,4 +302,4 @@ void SaveAudioClip(AudioProvider const& provider, fs::path const& path, int star
out.write(buf);
}
}
}
}

View file

@@ -45,6 +45,7 @@ public:
virtual ~AudioProvider() = default;
void GetAudio(void *buf, int64_t start, int64_t count) const;
void GetAudioWithVolume(void *buf, int64_t start, int64_t count, double volume) const;
void GetInt16MonoAudio(int16_t* buf, int64_t start, int64_t count) const;
void GetInt16MonoAudioWithVolume(int16_t *buf, int64_t start, int64_t count, double volume) const;

View file

@@ -25,6 +25,12 @@
#include <libaegisub/dispatch.h>
#if BOOST_VERSION >= 106900
#include <boost/gil.hpp>
#else
#include <boost/gil/gil_all.hpp>
#endif
enum {
NEW_SUBS_FILE = -1,
SUBS_FILE_ALREADY_LOADED = -2
@@ -81,6 +87,55 @@ std::shared_ptr<VideoFrame> AsyncVideoProvider::ProcFrame(int frame_number, doub
return frame;
}
VideoFrame AsyncVideoProvider::GetBlankFrame(bool white) {
VideoFrame result;
result.width = GetWidth();
result.height = GetHeight();
result.pitch = result.width * 4;
result.flipped = false;
result.data.resize(result.pitch * result.height, white ? 255 : 0);
return result;
}
VideoFrame AsyncVideoProvider::GetSubtitles(double time) {
// We want to combine all transparent subtitle layers onto one layer.
// Instead of alpha blending them all together, which can be messy and cause
// rounding errors, we draw them once on a black frame and once on a white frame,
// and solve for the color and alpha. This has the benefit of being independent
// of the subtitle provider, as long as the provider works by alpha blending.
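// Concretely: for a pixel with color c and normalized alpha a, compositing on black yields
// B = a*c and on white yields W = a*c + (1 - a)*255, so W - B = (1 - a)*255. The code below
// recovers a (scaled to 0..255) as 255 - (W - B) and the color as B / (a / 255).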
VideoFrame frame_black = GetBlankFrame(false);
if (!subs) return frame_black;
VideoFrame frame_white = GetBlankFrame(true);
subs_provider->LoadSubtitles(subs.get());
subs_provider->DrawSubtitles(frame_black, time / 1000.);
subs_provider->DrawSubtitles(frame_white, time / 1000.);
using namespace boost::gil;
auto blackview = interleaved_view(frame_black.width, frame_black.height, (bgra8_pixel_t*) frame_black.data.data(), frame_black.width * 4);
auto whiteview = interleaved_view(frame_white.width, frame_white.height, (bgra8_pixel_t*) frame_white.data.data(), frame_white.width * 4);
transform_pixels(blackview, whiteview, blackview, [=](const bgra8_pixel_t black, const bgra8_pixel_t white) -> bgra8_pixel_t {
int a = 255 - (white[0] - black[0]);
bgra8_pixel_t ret;
if (a == 0) {
ret[0] = 0;
ret[1] = 0;
ret[2] = 0;
ret[3] = 0;
} else {
ret[0] = black[0] / (a / 255.);
ret[1] = black[1] / (a / 255.);
ret[2] = black[2] / (a / 255.);
ret[3] = a;
}
return ret;
});
return frame_black;
}
static std::unique_ptr<SubtitlesProvider> get_subs_provider(wxEvtHandler *evt_handler, agi::BackgroundRunner *br) {
try {
return SubtitlesProviderFactory::GetProvider(br);

View file

@@ -78,6 +78,9 @@ class AsyncVideoProvider {
std::vector<std::shared_ptr<VideoFrame>> buffers;
// Returns a monochromatic frame with the current dimensions
VideoFrame GetBlankFrame(bool white);
public:
/// @brief Load the passed subtitle file
/// @param subs File to load
@@ -108,6 +111,15 @@ public:
/// @param raw Get raw frame without subtitles
std::shared_ptr<VideoFrame> GetFrame(int frame, double time, bool raw = false);
/// @brief Synchronously get the subtitles with transparent background
/// @param time Exact start time of the frame in seconds
///
/// This function is not used for drawing the subtitles on the screen, and it is not
/// guaranteed that drawing the resulting image over the current raw frame exactly
/// reproduces the currently rendered frame with subtitles. It is intended for
/// purposes like copying the current subtitles to the clipboard.
VideoFrame GetSubtitles(double time);
/// Ask the video provider to change YCbCr matrices
void SetColorSpace(std::string const& matrix);

View file

@@ -79,6 +79,7 @@ class AlsaPlayer final : public AudioPlayer {
std::atomic<double> volume{1.0};
int64_t start_position = 0;
std::atomic<int64_t> end_position{0};
bool fallback_mono16 = false; // whether to convert to 16 bit mono. FIXME: more flexible conversion
std::mutex position_mutex;
int64_t last_position = 0;
@@ -88,6 +89,8 @@ class AlsaPlayer final : public AudioPlayer {
std::thread thread;
snd_pcm_format_t GetPCMFormat(const agi::AudioProvider *provider);
void PlaybackThread();
void UpdatePlaybackPosition(snd_pcm_t *pcm, int64_t position)
@@ -115,6 +118,36 @@ public:
void SetEndPosition(int64_t pos) override;
};
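// Map the provider's sample format onto a native ALSA PCM format; when there is no direct
// match, fall back to 16-bit mono and let the provider do the conversion.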
snd_pcm_format_t AlsaPlayer::GetPCMFormat(const agi::AudioProvider *provider) {
if (provider->AreSamplesFloat()) {
switch (provider->GetBytesPerSample()) {
case 4:
return SND_PCM_FORMAT_FLOAT_LE;
case 8:
return SND_PCM_FORMAT_FLOAT64_LE;
default:
fallback_mono16 = true;
return SND_PCM_FORMAT_S16_LE;
}
} else {
switch (provider->GetBytesPerSample()) {
case 1:
return SND_PCM_FORMAT_U8;
case 2:
return SND_PCM_FORMAT_S16_LE;
case 3:
return SND_PCM_FORMAT_S24_LE;
case 4:
return SND_PCM_FORMAT_S32_LE;
case 8:
return SND_PCM_FORMAT_S32_LE;
default:
fallback_mono16 = true;
return SND_PCM_FORMAT_S16_LE;
}
}
}
void AlsaPlayer::PlaybackThread()
{
std::unique_lock<std::mutex> lock(mutex);
@@ -126,24 +159,11 @@ void AlsaPlayer::PlaybackThread()
BOOST_SCOPE_EXIT_ALL(&) { snd_pcm_close(pcm); };
do_setup:
snd_pcm_format_t pcm_format;
switch (/*provider->GetBytesPerSample()*/ sizeof(int16_t))
{
case 1:
LOG_D("audio/player/alsa") << "format U8";
pcm_format = SND_PCM_FORMAT_U8;
break;
case 2:
LOG_D("audio/player/alsa") << "format S16_LE";
pcm_format = SND_PCM_FORMAT_S16_LE;
break;
default:
return;
}
snd_pcm_format_t pcm_format = GetPCMFormat(provider);
if (snd_pcm_set_params(pcm,
pcm_format,
SND_PCM_ACCESS_RW_INTERLEAVED,
/*provider->GetChannels()*/ 1,
fallback_mono16 ? 1 : provider->GetChannels(),
provider->GetSampleRate(),
1, // allow resample
100*1000 // 100 milliseconds latency
@@ -151,8 +171,7 @@ do_setup:
return;
LOG_D("audio/player/alsa") << "set pcm params";
//size_t framesize = provider->GetChannels() * provider->GetBytesPerSample();
size_t framesize = sizeof(int16_t);
size_t framesize = fallback_mono16 ? sizeof(int16_t) : provider->GetChannels() * provider->GetBytesPerSample();
while (true)
{
@@ -176,7 +195,11 @@ do_setup:
{
auto avail = std::min(snd_pcm_avail(pcm), (snd_pcm_sframes_t)(end_position-position));
decode_buffer.resize(avail * framesize);
provider->GetInt16MonoAudioWithVolume(reinterpret_cast<int16_t*>(decode_buffer.data()), position, avail, volume);
if (fallback_mono16) {
provider->GetInt16MonoAudioWithVolume(reinterpret_cast<int16_t*>(decode_buffer.data()), position, avail, volume);
} else {
provider->GetAudioWithVolume(decode_buffer.data(), position, avail, volume);
}
snd_pcm_sframes_t written = 0;
while (written <= 0)
@@ -236,7 +259,11 @@ do_setup:
{
decode_buffer.resize(avail * framesize);
provider->GetInt16MonoAudioWithVolume(reinterpret_cast<int16_t*>(decode_buffer.data()), position, avail, volume);
if (fallback_mono16) {
provider->GetInt16MonoAudioWithVolume(reinterpret_cast<int16_t*>(decode_buffer.data()), position, avail, volume);
} else {
provider->GetAudioWithVolume(decode_buffer.data(), position, avail, volume);
}
snd_pcm_sframes_t written = 0;
while (written <= 0)
{
@@ -353,4 +380,4 @@ std::unique_ptr<AudioPlayer> CreateAlsaPlayer(agi::AudioProvider *provider, wxWi
return agi::make_unique<AlsaPlayer>(provider);
}
#endif // WITH_ALSA
#endif // WITH_ALSA

View file

@@ -71,6 +71,8 @@ class OpenALPlayer final : public AudioPlayer, wxTimer {
float volume = 1.f; ///< Current audio volume
ALsizei samplerate; ///< Sample rate of the audio
int bpf; ///< Bytes per frame
bool fallback_mono16 = false; ///< whether to fall back to int16 mono. FIXME: More flexible conversion
int format; ///< AL format (stereo/mono, 8/16 bit)
int64_t start_frame = 0; ///< First frame of playback
int64_t cur_frame = 0; ///< Next frame to write to playback buffers
@@ -125,8 +127,39 @@ public:
OpenALPlayer::OpenALPlayer(agi::AudioProvider *provider)
: AudioPlayer(provider)
, samplerate(provider->GetSampleRate())
, bpf(/*provider->GetChannels() * provider->GetBytesPerSample()*/sizeof(int16_t))
{
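// Pick an OpenAL buffer format matching the provider's channel count and sample size;
// layouts OpenAL cannot take directly fall back to 16-bit mono via the provider.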
switch (provider->GetChannels()) {
case 1:
switch (provider->GetBytesPerSample()) {
case 1:
format = AL_FORMAT_MONO8;
break;
case 2:
format = AL_FORMAT_MONO16;
break;
default:
format = AL_FORMAT_MONO16;
fallback_mono16 = true;
}
break;
case 2:
switch (provider->GetBytesPerSample()) {
case 1:
format = AL_FORMAT_STEREO8;
break;
case 2:
format = AL_FORMAT_STEREO16;
break;
default:
format = AL_FORMAT_MONO16;
fallback_mono16 = true;
}
break;
default:
format = AL_FORMAT_MONO16;
fallback_mono16 = true;
}
bpf = fallback_mono16 ? sizeof(int16_t) : provider->GetChannels() * provider->GetBytesPerSample();
device = alcOpenDevice(nullptr);
if (!device) throw AudioPlayerOpenError("Failed opening default OpenAL device");
@@ -239,16 +272,21 @@ void OpenALPlayer::FillBuffers(ALsizei count)
for (count = mid(1, count, buffers_free); count > 0; --count) {
ALsizei fill_len = mid<ALsizei>(0, decode_buffer.size() / bpf, end_frame - cur_frame);
if (fill_len > 0)
if (fill_len > 0) {
// Get fill_len frames of audio
provider->GetInt16MonoAudioWithVolume(reinterpret_cast<int16_t*>(decode_buffer.data()), cur_frame, fill_len, volume);
if (fallback_mono16) {
provider->GetInt16MonoAudioWithVolume(reinterpret_cast<int16_t*>(decode_buffer.data()), cur_frame, fill_len, volume);
} else {
provider->GetAudioWithVolume(decode_buffer.data(), cur_frame, fill_len, volume);
}
}
if ((size_t)fill_len * bpf < decode_buffer.size())
// And zerofill the rest
memset(&decode_buffer[fill_len * bpf], 0, decode_buffer.size() - fill_len * bpf);
cur_frame += fill_len;
alBufferData(buffers[buf_first_free], AL_FORMAT_MONO16, &decode_buffer[0], decode_buffer.size(), samplerate);
alBufferData(buffers[buf_first_free], format, &decode_buffer[0], decode_buffer.size(), samplerate);
alSourceQueueBuffers(source, 1, &buffers[buf_first_free]); // FIXME: collect buffer handles and queue all at once instead of one at a time?
buf_first_free = (buf_first_free + 1) % num_buffers;
--buffers_free;
@@ -308,4 +346,4 @@ std::unique_ptr<AudioPlayer> CreateOpenALPlayer(agi::AudioProvider *provider, wx
return agi::make_unique<OpenALPlayer>(provider);
}
#endif // WITH_OPENAL
#endif // WITH_OPENAL

View file

@@ -64,6 +64,32 @@ static const PaHostApiTypeId pa_host_api_priority[] = {
};
static const size_t pa_host_api_priority_count = sizeof(pa_host_api_priority) / sizeof(pa_host_api_priority[0]);
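// Choose a PortAudio sample format matching the provider; unsupported sample layouts fall
// back to 16-bit mono, which the provider can always synthesize.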
PaSampleFormat PortAudioPlayer::GetSampleFormat(agi::AudioProvider *provider) {
if (provider->AreSamplesFloat()) {
switch (provider->GetBytesPerSample()) {
case 4:
return paFloat32;
default:
fallback_mono16 = true;
return paInt16;
}
} else {
switch (provider->GetBytesPerSample()) {
case 1:
return paUInt8;
case 2:
return paInt16;
case 3:
return paInt24;
case 4:
return paInt32;
default:
fallback_mono16 = true;
return paInt16;
}
}
}
PortAudioPlayer::PortAudioPlayer(agi::AudioProvider *provider) : AudioPlayer(provider) {
PaError err = Pa_Initialize();
@@ -140,8 +166,8 @@ void PortAudioPlayer::OpenStream() {
const PaDeviceInfo *device_info = Pa_GetDeviceInfo((*device_ids)[i]);
PaStreamParameters pa_output_p;
pa_output_p.device = (*device_ids)[i];
pa_output_p.channelCount = /*provider->GetChannels()*/ 1;
pa_output_p.sampleFormat = paInt16;
pa_output_p.sampleFormat = GetSampleFormat(provider);
pa_output_p.channelCount = fallback_mono16 ? 1 : provider->GetChannels();
pa_output_p.suggestedLatency = device_info->defaultLowOutputLatency;
pa_output_p.hostApiSpecificStreamInfo = nullptr;
@@ -222,7 +248,11 @@ int PortAudioPlayer::paCallback(const void *inputBuffer, void *outputBuffer,
// Play something
if (lenAvailable > 0) {
player->provider->GetInt16MonoAudioWithVolume(reinterpret_cast<int16_t*>(outputBuffer), player->current, lenAvailable, player->GetVolume());
if (player->fallback_mono16) {
player->provider->GetInt16MonoAudioWithVolume(reinterpret_cast<int16_t*>(outputBuffer), player->current, lenAvailable, player->GetVolume());
} else {
player->provider->GetAudioWithVolume(outputBuffer, player->current, lenAvailable, player->GetVolume());
}
// Set play position
player->current += lenAvailable;
@@ -283,4 +313,4 @@ std::unique_ptr<AudioPlayer> CreatePortAudioPlayer(agi::AudioProvider *provider,
return agi::make_unique<PortAudioPlayer>(provider);
}
#endif // WITH_PORTAUDIO
#endif // WITH_PORTAUDIO

View file

@@ -64,6 +64,7 @@ class PortAudioPlayer final : public AudioPlayer {
PaTime pa_start; ///< PortAudio internal start position
PaStream *stream = nullptr; ///< PortAudio stream
bool fallback_mono16 = false; ///< whether to fall back to 16 bit mono
/// @brief PortAudio callback, used to fill buffer for playback, and prime the playback buffer.
/// @param inputBuffer Input buffer.
@@ -87,6 +88,8 @@ class PortAudioPlayer final : public AudioPlayer {
/// @param userData Local data to be handed to the callback.
static void paStreamFinishedCallback(void *userData);
PaSampleFormat GetSampleFormat(agi::AudioProvider *provider);
/// Gather the list of output devices supported by a host API
/// @param host_idx Host API ID
void GatherDevices(PaHostApiIndex host_idx);

View file

@@ -48,7 +48,7 @@
namespace {
class PulseAudioPlayer final : public AudioPlayer {
float volume = 1.f;
pa_cvolume volume;
bool is_playing = false;
volatile unsigned long start_frame = 0;
@@ -56,6 +56,7 @@ class PulseAudioPlayer final : public AudioPlayer {
volatile unsigned long end_frame = 0;
unsigned long bpf = 0; // bytes per frame
bool fallback_mono16 = false; // whether to convert to 16 bit mono. FIXME: more flexible conversion
wxSemaphore context_notify{0, 1};
wxSemaphore stream_notify{0, 1};
@@ -73,6 +74,7 @@ class PulseAudioPlayer final : public AudioPlayer {
int paerror = 0;
static void pa_setvolume_success(pa_context *c, int success, PulseAudioPlayer *thread);
/// Called by PA to notify about other context-related stuff
static void pa_context_notify(pa_context *c, PulseAudioPlayer *thread);
/// Called by PA when a stream operation completes
@@ -82,6 +84,8 @@ class PulseAudioPlayer final : public AudioPlayer {
/// Called by PA to notify about other stream-related stuff
static void pa_stream_notify(pa_stream *p, PulseAudioPlayer *thread);
/// Find the sample format and set fallback_mono16 if necessary
pa_sample_format_t GetSampleFormat(const agi::AudioProvider *provider);
public:
PulseAudioPlayer(agi::AudioProvider *provider);
~PulseAudioPlayer();
@@ -94,9 +98,35 @@ public:
int64_t GetCurrentPosition();
void SetEndPosition(int64_t pos);
void SetVolume(double vol) { volume = vol; }
void SetVolume(double vol);
};
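// Translate the provider's sample format into a PulseAudio sample format; anything PulseAudio
// cannot take directly triggers the 16-bit mono fallback.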
pa_sample_format_t PulseAudioPlayer::GetSampleFormat(const agi::AudioProvider *provider) {
if (provider->AreSamplesFloat()) {
switch (provider->GetBytesPerSample()) {
case 4:
return PA_SAMPLE_FLOAT32LE;
default:
fallback_mono16 = true;
return PA_SAMPLE_S16LE;
}
} else {
switch (provider->GetBytesPerSample()) {
case 1:
return PA_SAMPLE_U8;
case 2:
return PA_SAMPLE_S16LE;
case 3:
return PA_SAMPLE_S24LE;
case 4:
return PA_SAMPLE_S32LE;
default:
fallback_mono16 = true;
return PA_SAMPLE_S16LE;
}
}
}
PulseAudioPlayer::PulseAudioPlayer(agi::AudioProvider *provider) : AudioPlayer(provider) {
// Initialise a mainloop
mainloop = pa_threaded_mainloop_new();
@@ -133,13 +163,14 @@ PulseAudioPlayer::PulseAudioPlayer(agi::AudioProvider *provider) : AudioPlayer(p
}
// Set up stream
bpf = /*provider->GetChannels() * provider->GetBytesPerSample()*/sizeof(int16_t);
pa_sample_spec ss;
ss.format = PA_SAMPLE_S16LE; // FIXME
ss.format = GetSampleFormat(provider);
bpf = fallback_mono16 ? sizeof(int16_t) : provider->GetChannels() * provider->GetBytesPerSample();
ss.rate = provider->GetSampleRate();
ss.channels = /*provider->GetChannels()*/1;
ss.channels = fallback_mono16 ? 1 : provider->GetChannels();
pa_channel_map map;
pa_channel_map_init_auto(&map, ss.channels, PA_CHANNEL_MAP_DEFAULT);
pa_cvolume_init(&volume);
stream = pa_stream_new(context, "Sound", &ss, &map);
if (!stream) {
@@ -269,6 +300,11 @@ int64_t PulseAudioPlayer::GetCurrentPosition()
return start_frame + playtime * provider->GetSampleRate() / (1000*1000);
}
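// Volume is now applied through PulseAudio's per-stream (sink input) volume rather than by
// scaling samples in pa_stream_write.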
void PulseAudioPlayer::SetVolume(double vol) {
pa_cvolume_set(&volume, fallback_mono16 ? 1 : provider->GetChannels(), pa_sw_volume_from_linear(vol));
pa_context_set_sink_input_volume(context, pa_stream_get_index(stream), &volume, nullptr, nullptr);
}
/// @brief Called by PA to notify about other context-related stuff
void PulseAudioPlayer::pa_context_notify(pa_context *c, PulseAudioPlayer *thread)
{
@@ -308,7 +344,11 @@ void PulseAudioPlayer::pa_stream_write(pa_stream *p, size_t length, PulseAudioPl
unsigned long maxframes = thread->end_frame - thread->cur_frame;
if (frames > maxframes) frames = maxframes;
void *buf = malloc(frames * bpf);
thread->provider->GetInt16MonoAudioWithVolume(reinterpret_cast<int16_t*>(buf), thread->cur_frame, frames, thread->volume);
if (thread->fallback_mono16) {
thread->provider->GetInt16MonoAudio(reinterpret_cast<int16_t *>(buf), thread->cur_frame, frames);
} else {
thread->provider->GetAudio(buf, thread->cur_frame, frames);
}
::pa_stream_write(p, buf, frames*bpf, free, 0, PA_SEEK_RELATIVE);
thread->cur_frame += frames;
}
@@ -324,4 +364,4 @@ void PulseAudioPlayer::pa_stream_notify(pa_stream *p, PulseAudioPlayer *thread)
std::unique_ptr<AudioPlayer> CreatePulseAudioPlayer(agi::AudioProvider *provider, wxWindow *) {
return agi::make_unique<PulseAudioPlayer>(provider);
}
#endif // WITH_LIBPULSE
#endif // WITH_LIBPULSE

View file

@@ -304,9 +304,13 @@ struct video_focus_seek final : public validator_video_loaded {
}
};
wxImage get_image(agi::Context *c, bool raw) {
wxImage get_image(agi::Context *c, bool raw, bool subsonly = false) {
auto frame = c->videoController->GetFrameN();
return GetImage(*c->project->VideoProvider()->GetFrame(frame, c->project->Timecodes().TimeAtFrame(frame), raw));
if (subsonly) {
return GetImageWithAlpha(c->project->VideoProvider()->GetSubtitles(c->project->Timecodes().TimeAtFrame(frame)));
} else {
return GetImage(*c->project->VideoProvider()->GetFrame(frame, c->project->Timecodes().TimeAtFrame(frame), raw));
}
}
struct video_frame_copy final : public validator_video_loaded {
@@ -331,6 +335,19 @@ struct video_frame_copy_raw final : public validator_video_loaded {
}
};
struct video_frame_copy_subs final : public validator_video_loaded {
CMD_NAME("video/frame/copy/subs")
STR_MENU("Copy image to Clipboard (only subtitles)")
STR_DISP("Copy image to Clipboard (only subtitles)")
STR_HELP("Copy the currently displayed subtitles to the clipboard, with transparent background")
void operator()(agi::Context *c) override {
wxBitmap img(get_image(c, false, true));
img.UseAlpha();
SetClipboard(img);
}
};
struct video_frame_next final : public validator_video_loaded {
CMD_NAME("video/frame/next")
STR_MENU("Next Frame")
@@ -473,7 +490,7 @@ struct video_frame_prev_large final : public validator_video_loaded {
}
};
static void save_snapshot(agi::Context *c, bool raw) {
static void save_snapshot(agi::Context *c, bool raw, bool subsonly = false) {
auto option = OPT_GET("Path/Screenshot")->GetString();
agi::fs::path basepath;
@ -508,7 +525,7 @@ static void save_snapshot(agi::Context *c, bool raw) {
path = agi::format("%s_%03d_%d.png", basepath.string(), session_shot_count++, c->videoController->GetFrameN());
} while (agi::fs::FileExists(path));
get_image(c, raw).SaveFile(to_wx(path), wxBITMAP_TYPE_PNG);
get_image(c, raw, subsonly).SaveFile(to_wx(path), wxBITMAP_TYPE_PNG);
}
struct video_frame_save final : public validator_video_loaded {
@@ -533,6 +550,17 @@ struct video_frame_save_raw final : public validator_video_loaded {
}
};
struct video_frame_save_subs final : public validator_video_loaded {
CMD_NAME("video/frame/save/subs")
STR_MENU("Save PNG snapshot (only subtitles)")
STR_DISP("Save PNG snapshot (only subtitles)")
STR_HELP("Save the currently displayed subtitles with transparent background to a PNG file in the video's directory")
void operator()(agi::Context *c) override {
save_snapshot(c, false, true);
}
};
struct video_jump final : public validator_video_loaded {
CMD_NAME("video/jump")
CMD_ICON(jumpto_button)
@@ -780,6 +808,7 @@ namespace cmd {
reg(agi::make_unique<video_focus_seek>());
reg(agi::make_unique<video_frame_copy>());
reg(agi::make_unique<video_frame_copy_raw>());
reg(agi::make_unique<video_frame_copy_subs>());
reg(agi::make_unique<video_frame_next>());
reg(agi::make_unique<video_frame_next_boundary>());
reg(agi::make_unique<video_frame_next_keyframe>());
@ -790,6 +819,7 @@ namespace cmd {
reg(agi::make_unique<video_frame_prev_large>());
reg(agi::make_unique<video_frame_save>());
reg(agi::make_unique<video_frame_save_raw>());
reg(agi::make_unique<video_frame_save_subs>());
reg(agi::make_unique<video_jump>());
reg(agi::make_unique<video_jump_end>());
reg(agi::make_unique<video_jump_start>());

View file

@@ -226,6 +226,9 @@
{ "command" : "video/frame/save/raw" },
{ "command" : "video/frame/copy/raw" },
{},
{ "command" : "video/frame/save/subs" },
{ "command" : "video/frame/copy/subs" },
{},
{ "command" : "video/copy_coordinates" }
]
}

View file

@@ -235,6 +235,9 @@
{ "command" : "video/frame/save/raw" },
{ "command" : "video/frame/copy/raw" },
{},
{ "command" : "video/frame/save/subs" },
{ "command" : "video/frame/copy/subs" },
{},
{ "command" : "video/copy_coordinates" }
]
}

View file

@@ -48,3 +48,17 @@ wxImage GetImage(VideoFrame const& frame) {
copy_and_convert_pixels(src, dst, color_converter());
return img;
}
wxImage GetImageWithAlpha(VideoFrame const &frame) {
wxImage img = GetImage(frame);
img.InitAlpha();
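// frame.data is packed BGRA, so the alpha of each pixel is every fourth byte starting at offset 3.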
uint8_t *dst = img.GetAlpha();
const uint8_t *src = frame.data.data() + 3;
for (int y = 0; y < frame.height; y++) {
for (int x = 0; x < frame.width; x++) {
*(dst++) = *src;
src += 4;
}
}
return img;
}

View file

@@ -29,3 +29,4 @@ struct VideoFrame {
};
wxImage GetImage(VideoFrame const& frame);
wxImage GetImageWithAlpha(VideoFrame const& frame);