Samples using different Buffer types causes ClassCastException #1355
You'll need to make sure that all |
Hey, thanks for the quick reply. |
One easy way is by calling |
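The call named above is truncated in this thread. Independent of which API it referred to, converting all sample buffers to one element type by hand is straightforward. A minimal pure-Java sketch (the helper name `shortsToFloats` is hypothetical, not part of JavaCV):

```java
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

public class SampleConvert {
    // Convert signed 16-bit PCM samples to 32-bit floats in [-1, 1].
    static FloatBuffer shortsToFloats(ShortBuffer in) {
        FloatBuffer out = FloatBuffer.allocate(in.remaining());
        while (in.hasRemaining()) {
            out.put(in.get() / 32768f); // 32768 = 2^15, the magnitude of Short.MIN_VALUE
        }
        out.flip();
        return out;
    }

    public static void main(String[] args) {
        ShortBuffer pcm = ShortBuffer.wrap(new short[] {0, 16384, -32768});
        FloatBuffer f = shortsToFloats(pcm);
        System.out.println(f.get(0) + " " + f.get(1) + " " + f.get(2)); // prints "0.0 0.5 -1.0"
    }
}
```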
I'm sorry, but the error still persists. What exactly should I set the sampleFormat to? I tried it with the other grabber's sampleFormat, but that didn't help. The example you provided also seems to be outdated, as it only shows grabber.setSampleMode and passes in a value from a class or enum ("SampleMode") that doesn't exist or is not accessible to me. EDIT SECOND EDIT |
Ah yes, sorry, that's what |
Okay so I just had some time to test some more, and as it turns out the method
EDIT |
It works just fine, take a look at the sample code I gave you above: https://github.com/bytedeco/javacv/blob/master/platform/src/test/java/org/bytedeco/javacv/FrameGrabberTest.java |
Of course you can't. I just checked, and I truly do have the wrong version in my pom.xml. I can't change anything at the moment, but I imagine that's it. |
Yeah, that was it. Sorry for the trouble. However, the audio is played back much slower than it actually is, causing it to sound weird. Should I open a new issue for this once I feel like I'm done testing, or...? |
Right,
You're probably just not setting the sample rate properly. Make sure that you do call |
Did not notice your reply here, sorry about that. EDIT But is this (the code at the top) the correct way of achieving my goal (exporting a video with sound from multiple audio sources) anyways? Not understanding how Buffers work or can be transferred to sound, it's just the first thing I came up with and I can't believe it's actually right. |
Setting the sample rate on the grabber might not have any effect, especially for files. You'll need to use for the recorder the sample rate that you get from the grabber. |
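That advice can be sketched as follows (assuming JavaCV's FFmpegFrameGrabber/FFmpegFrameRecorder pair; file names are placeholders):

```java
// Sketch: propagate the grabber's audio parameters to the recorder,
// rather than setting an independent rate on the recorder.
FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("input.mp4");
grabber.start();

FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
        "output.mp4", grabber.getImageWidth(), grabber.getImageHeight(),
        grabber.getAudioChannels());
recorder.setFrameRate(grabber.getFrameRate());
recorder.setSampleRate(grabber.getSampleRate()); // use the rate the grabber reports
recorder.start();
```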
The samples rates are the same anyways (even without me setting them). Also the problem with that would be, that if I had multiple sources with different sample rates, setting the recorder's sample rate wouldn't work. |
If you need that, start looking at FFmpegFrameFilter to do some resampling. |
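A hedged sketch of what that could look like, assuming a JavaCV version whose FFmpegFrameFilter has the audio-only constructor and `pullSamples()`, and using ffmpeg's `aresample` filter (the target rate 44100 is illustrative):

```java
// Sketch: resample decoded audio to 44100 Hz before recording.
FFmpegFrameFilter filter =
        new FFmpegFrameFilter("aresample=44100", grabber.getAudioChannels());
filter.setSampleRate(grabber.getSampleRate()); // input rate, so ffmpeg knows what to convert from
filter.start();

Frame frame;
while ((frame = grabber.grabSamples()) != null) {
    filter.push(frame);
    Frame resampled;
    while ((resampled = filter.pullSamples()) != null) {
        recorder.record(resampled);
    }
}
filter.stop();
```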
Okay, this is what I got so far:
However, this throws the following exception:
This is how I intend to use the filter (modification to the code above):
Will this work (and how do I get rid of that exception)? |
Actually |
Of course. I decided to leave every bit of code in, because after having an error in the pom.xml, this one could be anywhere.
The output and input file(s) as well as the pom.xml are attached. |
Could you simplify that to only 1 file? |
I'm not sure if I understand what you mean, but I guess you could remove the audioGrabber and see what happens in that case like this:
Result: The audio of the video plays back, however with a bit of latency. Keep in mind, though, that the hard-coded code in this and the previous post is just for testing purposes and will later be replaced with code that can handle a variable number of files. |
Ok, so the problem is with "latency"? That just means you're missing frames. Make sure that the audio frames and the video frames you write span the same exact amount of time. See issue #1333 for a discussion about that. /cc @anotherche |
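One way to check the "same exact amount of time" condition: at a given frame rate and sample rate, each video frame should be accompanied by a fixed number of audio samples per channel. A small self-contained calculation (the numbers are illustrative):

```java
public class FrameBudget {
    // Audio samples (per channel) that span exactly one video frame.
    static int samplesPerFrame(int sampleRate, double frameRate) {
        return (int) Math.round(sampleRate / frameRate);
    }

    public static void main(String[] args) {
        // 44.1 kHz audio against 30 fps video: 1470 samples per frame.
        System.out.println(samplesPerFrame(44100, 30.0)); // prints "1470"
    }
}
```

If the audio frames written per video frame drift away from this budget, the streams desynchronize over time.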
I'm not sure I understood what the original goal was. From the first message, it seems that @SchredDev would like to add an additional audio track to the file. But the code looks as though new portions of sound are sequentially appended to the existing sound buffers. It seems to me that, whatever the original goal, such a method will not give a normal result.
Okay, I'm just going to try to explain my goal in detail. The editor I am working on creates a BufferedImage for each frame in the movie. This is where I am going to get all the video footage from when exporting (the code above). This means that there won't be a video file in this method, because that data will be provided directly as a frame. At the moment it is just in there so I can get easy access to visual and more audio data (because it provides both).
Not exactly. I am guessing the audio is played back slower even when there's only one file, because that's what happens when I merge multiple files (just more extreme), and the video is longer than it should be. That would cause the audio to be desynchronized. |
@SchredDev, It’s still not clear what “merge” means. Is there a need for the resulting video to have several different audio tracks, or for sound from many sources to be mixed into one stream so that everything sounds together? |
The latter one. |
Maybe I don't know something about the possibilities of ffmpeg or JavaCV, but it seems to me that just appending extra samples to the end of an existing buffer cannot lead to a normal result, including mixing of two sound sources. Theoretically, mixing assumes resampling (decoding to an uncompressed format, summation, encoding), which should include the following operations: 1. portions of the same duration are taken from the two sound sources; 2. they are decoded into PCM format with the same bit depth and sample rate; 3. the resulting samples are scaled (to prevent possible clipping) and summed (old sample + new sample, sample by sample); 4. the result is recorded (encoded). |
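The summation step described above can be sketched in plain Java, assuming both sources are already decoded to 16-bit PCM with the same sample rate and channel count; halving each input is the simplest clipping guard:

```java
public class Mixer {
    // Mix two equal-length 16-bit PCM buffers: scale each by 0.5, then sum
    // sample by sample, so the result cannot overflow the short range.
    static short[] mix(short[] a, short[] b) {
        short[] out = new short[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = (short) (a[i] / 2 + b[i] / 2);
        }
        return out;
    }

    public static void main(String[] args) {
        short[] mixed = mix(new short[] {1000, -2000}, new short[] {3000, 4000});
        System.out.println(mixed[0] + " " + mixed[1]); // prints "2000 1000"
    }
}
```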
As far as I understand, he's just trying to add audio channels, but he'll
need the same amount of samples for all channels to do it that way, or they
won't be synchronized.
|
@anotherche Seems like that's what could be happening here! I've uploaded a video showcasing the expected and actual outputs. If it is, could you please point me to an example showcasing what you just explained? |
If you're looking into mixing audio samples, the amix filter would be the easiest thing to use: |
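A hedged sketch of amix through JavaCV, assuming a version of FFmpegFrameFilter that supports multiple audio inputs via `setAudioInputs` and the indexed `push(n, frame)` (the grabber and recorder variables are placeholders):

```java
// Sketch: mix two audio streams into one with ffmpeg's amix filter.
FFmpegFrameFilter mixFilter = new FFmpegFrameFilter(
        "[0:a][1:a]amix=inputs=2[a]", audioChannels);
mixFilter.setAudioInputs(2);
mixFilter.setSampleRate(sampleRate);
mixFilter.start();

mixFilter.push(0, videoGrabber.grabSamples()); // first audio source
mixFilter.push(1, audioGrabber.grabSamples()); // second audio source

Frame mixed;
while ((mixed = mixFilter.pullSamples()) != null) {
    recorder.record(mixed);
}
```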
Okay, thank you. |
Okay, so judging by the examples I could find, I'd have to initialize the filter like this, right? But how would I set the inputs in that case? |
See issue #1214 about that. |
Okay, I copied the code they showed in their issue, and I don't get an error anymore (when calling start). However, I still do not get how to give the filter the inputs and get the output. This is what I tried:
which throws the error:
at the first line. |
Check the log to get more information about that error. |
Oh. My mistake, didn't see that. It says:
I found this and that issue concerning the second output, but those don't seem to be the case here. |
Make sure to call |
Thanks, the error is gone now. However,
|
Don't call |
It finally works, thank you so much! I used |
I am working on a video editor and want to record multiple sounds (samples) per frame, because most videos contain sound and you want to add your own ones as well. My approach to solving this was to merge the samples array of all the audio elements together like this:

However, this gives me a ClassCastException at "recorder.record(frame);", saying that it can't cast from "java.nio.DirectShortBufferU" to a "FloatBuffer". While testing I found out that the error occurs the first time it gets to a sample which came from the audioGrabber, meaning a file that was imported as an mp3 and not alongside a video as mp4. How do I solve this? Or should I tackle my goal differently?
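For reference, the fix that eventually emerges in the thread is to have every grabber decode audio to the same sample type. A sketch, assuming a JavaCV version where FrameGrabber.SampleMode is available (file names are placeholders):

```java
// Sketch: ask every grabber to decode audio to the same sample type,
// so merged Frame.samples buffers all share one Buffer subclass.
FFmpegFrameGrabber videoGrabber = new FFmpegFrameGrabber("clip.mp4");
videoGrabber.setSampleMode(FrameGrabber.SampleMode.FLOAT);
videoGrabber.start();

FFmpegFrameGrabber audioGrabber = new FFmpegFrameGrabber("music.mp3");
audioGrabber.setSampleMode(FrameGrabber.SampleMode.FLOAT);
audioGrabber.start();

// Both grabbers now deliver float samples, so recording merged buffers
// no longer trips over a ShortBuffer-to-FloatBuffer cast.
```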