Right way to implement overlay filter #945
So I used the movie filter and now the string looks like this: |
Are you sure you are getting the output in the right pixel format? See
issue #887.
|
Yes, the image is transformed exactly like that. I will adjust it and share the result. But another problem is that the image does not appear in the final video at all (not transformed, not tripled, just absent). By the way, maybe it would be useful to extract some frequently used filters into classes and also provide an API for constructing a filter graph? |
Yes, anything to make it easier to use or documentation is welcome! Thanks
|
BTW, this sounds like a duplicate of issue #667 and I'm pretty sure that's fixed by adjusting the pixel formats accordingly. If this isn't the case though, please let me know. |
Firstly, I tested ffmpeg on a PC with the following command and it worked fine: Next, setting pixelFormat in Recorder.record didn't actually help, but setting the right pixelFormat on the Filter solved the problem of the tripled and rotated video (it is not necessary to specify the format in the recorder). Now the filter's setup looks like this:
The path is specified correctly and looks like
So the path is also OK. BUT there is still no image in the final video!
where filterManager just pushes and pulls frames to the filter inside a try-catch. I tried to add an offset to the image in case it is outside of the screen: |
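For context, such a push/pull loop might be sketched like this (untested; `grabber`, `recorder`, `filterString`, and the dimensions are assumptions for illustration, not code from this thread):

```java
// Sketch (assumption, not code from this thread): push each grabbed frame
// through the filter and record whatever comes out, catching filter errors
// per frame so one bad frame doesn't abort the whole recording.
FFmpegFrameFilter filter = new FFmpegFrameFilter(filterString, width, height);
filter.start();
Frame frame;
while ((frame = grabber.grab()) != null) {
    try {
        filter.push(frame);
        Frame filtered;
        while ((filtered = filter.pull()) != null) {
            recorder.record(filtered);
        }
    } catch (FrameFilter.Exception e) {
        // log and skip this frame rather than stopping the recording
        e.printStackTrace();
    }
}
filter.stop();
```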
Maybe we could get more information about what is happening by increasing the log verbosity? |
Anyway, if you could call |
|
Also make sure to call |
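The elided calls likely refer to FFmpeg's logging controls; one common way to raise verbosity through the JavaCPP presets (an assumption about what was suggested, not a quote from this thread) is:

```java
// Raise FFmpeg's log verbosity so filter/format negotiation details
// appear in the log output. avutil is org.bytedeco.javacpp.avutil in
// the JavaCV 1.4.x package layout assumed here.
avutil.av_log_set_level(avutil.AV_LOG_DEBUG);
```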
Thanks! I got this for overlaying a video (it looks the same as for an image):
Where the first part is the initialization, and the second is two frames. |
It says something about scaling. Maybe there is a bug in there, so try to
use an image that has the same size as the video.
|
Nothing changed |
Ok, maybe it doesn't support the pixel format of the video or of the image.
Try to convert both of them to some other pixel format, but the same for
both.
|
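One way to express that conversion is a `format` filter on each input before the overlay. A minimal sketch of building such a graph string (the labels and helper class are illustrative, not from this thread):

```java
// Builds a filter-graph string that converts both the video ([0]) and the
// image ([1]) to one common pixel format before overlaying them.
// Class name and link labels are illustrative.
public class OverlayGraph {
    static String build(String pixFmt) {
        return "[0]format=" + pixFmt + "[base];"
             + "[1]format=" + pixFmt + "[img];"
             + "[base][img]overlay";
    }

    public static void main(String[] args) {
        System.out.println(build("yuv420p"));
    }
}
```

The resulting string can then be handed to the filter, so both streams reach `overlay` in the same pixel format.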
I'm using yuv420, it's sad if it is not supported |
Like I said try to use a filter to convert it. The command line tool may be
doing that automatically.
|
I used rgb24 and it didn't help |
Ok, please post the whole filter command you used and the whole log output again. |
Ah, I might have an answer here:
https://ffmpeg.org/ffmpeg-filters.html#movie-1 We should probably use |
I've tried that already and nothing changed. Also, it works without loop argument on a desktop version |
So it works with JavaCV on Windows, just not on Android? That sounds like a permission issue... |
No, it works fine on ffmpeg. I didn't try it on a desktop JavaCV, but according to other comments it is not the case. |
This is a new filter command:
And this is log for it:
|
An image scaler is still being inserted, that's strange. Anyway, if we compare that with a log of the ffmpeg program, I'm sure we'll find the difference pretty quickly. What does the output for the ffmpeg program look like at the same log level? |
The command is And the log is
I included the full log, replacing repetitive parts with ellipses |
I agree that would make more sense. I can propose a patch for ffmpeg 4.1 to do it automatically. (There may be some users who compile libavcodec statically and combine it with other archives into a single shared library, so perhaps exporting the global JNI_OnLoad could conflict.) Surprisingly, this wasn't really discussed on the ffmpeg list when the API was implemented: https://ffmpeg.org/pipermail/ffmpeg-devel/2016-March/190546.html cc @mbouron |
Yeah, it wouldn't work too well for static builds... |
I use JavaCV FFmpeg with the original OpenCV4Android
Does JavaCV already support FFmpegFrameRecorder with MediaCodec? |
Yes, it's true that OpenCV doesn't support YUV. But drawText does? Good to know. According to @tmm1, yes it does. |
FFmpeg Filter's drawText? @tmm1 will there be some documentation on how to use MediaCodec with FFmpegFrameRecorder? |
It seems OpenCV doesn't directly put the text into the data (
so when using it we still have to get the new byte[] and use that instead of the old byte[] from onPreviewCallback
How does JavaCV's OpenCVFrameConverter work? Does it create a Mat from a Frame without copying anything? |
Yes, that's how it works. We can access the data pointer from the Java bindings of OpenCV as well: bytedeco/javacpp#38 (comment) |
@saudet but actually copying/converting frames (bytes, Mats) isn't that big a problem in video recording. In the next example (StackOverflow) with MediaRecorder, I converted byte[] to Mat, then Mat to Bitmap, and then used that Bitmap to draw on MediaRecorder's Surface (lockCanvas). So it depends on how fast the recorder can encode frames. I hope FFmpeg + MediaRecorder will work much better |
I didn't test it, but something like this should work with JavaCV 1.4.2 now:

```java
FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(...);
grabber.setVideoCodecName("h264_mediacodec");
...
```
|
@anonym24 What does FFmpeg output to the log? |
Logs during start and stop
when I use |
What does |
output
|
Ok, looks good, what about |
here's encoder:
it returns null
|
Hum, looks like encoding isn't supported: |
Yes it looks like mediacodec encoders do not exist in ffmpeg. It wouldn't be very hard to add, but no one has submitted any patches so far. |
@tmm1 can you do it? (if you're interested of course) |
BTW, I've implemented support for multiple inputs in commit 3c92eef, so we can now use a filter graph like "[0][1]overlay", although I've decided to follow the Please give it a try with the snapshots: http://bytedeco.org/builds/ Thanks! |
It's great, now it is possible to create a proper slow-mo filter and others, with split audio and video editing, without manually splitting streams. Thanks! |
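Building on the commit mentioned above, a minimal sketch of a two-input overlay might look like this (untested; assumes a JavaCV build containing 3c92eef, and `width`, `height`, `videoFrame`, and `overlayFrame` are placeholders):

```java
// Sketch (untested): two-input overlay using the multi-input filter graph
// support added in commit 3c92eef. videoFrame/overlayFrame are placeholder
// frames coming from grabbers or converters.
FFmpegFrameFilter filter = new FFmpegFrameFilter("[0][1]overlay", width, height);
filter.setVideoInputs(2);      // the graph has two video inputs: [0] and [1]
filter.start();
filter.push(0, videoFrame);    // [0]: the main video frame
filter.push(1, overlayFrame);  // [1]: the image to overlay on top
Frame result = filter.pull();  // composited frame, or null if none is ready yet
filter.stop();
```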
I want to overlay two separate PNG files on the live video stream at the same time. I have managed to overlay a single image on the video using the |
You can chain filters together, just direct the first filter's output to the second's input. |
Thanks @DeKinci, chaining filters together worked for me! |
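The chaining suggested above can be written as a single graph string. A sketch for two images on one video (the coordinates, labels, and helper class are illustrative):

```java
// Builds a graph that overlays two images ([1] and [2]) onto the video ([0])
// by feeding the first overlay's output into the second overlay.
// Coordinates and link labels are illustrative.
public class ChainedOverlays {
    static String build() {
        return "[0][1]overlay=x=10:y=10[tmp];"
             + "[tmp][2]overlay=x=10:y=200";
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```

With FFmpegFrameFilter, a graph like this would presumably need three video inputs (e.g. `setVideoInputs(3)`), one per stream referenced in the string.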
BTW, FFmpeg 6.0 now supports hardware accelerated encoding using MediaCodec, so we don't need to use its API directly anymore. Please give it a try with the snapshots: http://bytedeco.org/builds/ |
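A sketch of what that might look like (untested; the output name, dimensions, and frame source are placeholders):

```java
// Sketch (untested): hardware-accelerated H.264 encoding through FFmpeg 6.0's
// MediaCodec support, selected by codec name on the recorder.
// "out.mp4", width, height, and frame are placeholders.
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder("out.mp4", width, height);
recorder.setVideoCodecName("h264_mediacodec");
recorder.setFormat("mp4");
recorder.start();
recorder.record(frame);  // frame comes from a grabber or converter
recorder.stop();
recorder.release();
```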
I'm currently working on adding an overlay image on top of the video.
It is clear how to do it with bare FFmpeg, but what is the right way to provide an image to FFmpegFrameFilter for the overlay filter?
Is it possible to write
-i path/to/img.jpg -filter_complex "[0][1]overlay"
there? Or is there another way? And what is the shortcut for the video stream (0 or 1 in this case)?
I know that it sounds more like a StackOverflow question, but it seems impossible to get a reply on a JavaCV topic there