Scaler: Multiple outputs with different resolutions, Issue 2040

Message ID B15060CE1DBCC944910FD7BB5CD0DB7504AA1E@msg10003.rgbnetworks.com
State Superseded

Commit Message

Manjunath Siddaiah March 17, 2011, 8:07 p.m.
Hi,


If "CONFIG_AVFILTER" is enabled, scaling goes through filtering path. Since "AVFilterContext" variables are in

"AVInputStream" structure we can't have different context for different outputs in case of multiple outputs. Because of this there will be video distortions or segmentation fault if we do multiple outputs with different resolutions.



The solution is to move the "AVFilterContext" fields from "AVInputStream" to the "AVOutputStream" structure, so that each output stream gets its own filter context when there are multiple outputs.
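
For reference, a minimal sketch of what AVOutputStream looks like after the change (field names are taken from the patch at the bottom of this page; pre-existing members such as reformat_ctx, fifo and logfile are elided):

typedef struct AVOutputStream {
    /* ... pre-existing members (reformat_ctx, fifo, logfile, ...) ... */
#if CONFIG_AVFILTER
    /* per-output filter chain: every output stream now owns its own
       buffer source, sink and in-flight picture reference */
    AVFilterContext   *input_video_filter;   /* "buffer" source fed with decoded frames */
    AVFilterContext   *output_video_filter;  /* sink the encoder pulls scaled frames from */
    AVFrame           *filter_frame;
    int                has_filter_frame;
    AVFilterBufferRef *picref;
#endif
} AVOutputStream;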

Tested the patch by taking a single input stream and producing outputs with different resolutions, as follows:

./ffmpeg -i ../MP4_009-720x480_100_128_44.1_30.mp4 -s 176x144 ../out1.mp4 -s 320x240 ../out2.mp4 -s 352x288 ../out3.mp4 -s 640x480 ../out4.mp4

Without this patch, the same command produces a segmentation fault or distorted video outputs.
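
The corresponding change in the decoding path is that each decoded video frame is pushed into every video output's own buffer source, instead of into a single per-input filter graph. Simplified from the patch below (error handling and surrounding code omitted):

if (ist->st->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
    /* feed the decoded picture to every video output's filter graph */
    for (i = 0; i < nb_ostreams; i++) {
        AVOutputStream *ost = ost_table[i];
        if (ost->st->codec->codec_type == AVMEDIA_TYPE_VIDEO && ost->input_video_filter)
            av_vsrc_buffer_add_frame(ost->input_video_filter, &picture,
                                     ist->pts,
                                     ist->st->codec->sample_aspect_ratio);
    }
}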

Comments

Luca Barbato March 17, 2011, 9:59 p.m. | #1
On 3/17/11 9:07 PM, Manjunath Siddaiah wrote:
> Hi,
>
>
> If "CONFIG_AVFILTER" is enabled, scaling goes through filtering path.
> Since "AVFilterContext" variables are in
>
> "AVInputStream" structure we can't have different context for different
> outputs in case of multiple outputs. Because of this there will be video
> distortions or segmentation fault if we do multiple outputs with different resolutions.
>
>
>
> The solution is to move the "AVFilterContext" fields from "AVInputStream" to the
> "AVOutputStream" structure, so that each output stream gets its own filter context
> when there are multiple outputs.
>
> Tested the patch by taking a single input stream and producing outputs with
> different resolutions, as follows:
>
> ./ffmpeg -i ../MP4_009-720x480_100_128_44.1_30.mp4 -s 176x144 ../out1.mp4 -s 320x240
> ../out2.mp4 -s 352x288 ../out3.mp4 -s 640x480 ../out4.mp4

Seems correct, I'd like to have Stefano's opinion on it.

lu
Måns Rullgård March 17, 2011, 10:22 p.m. | #2
Luca Barbato <lu_zero@gentoo.org> writes:

> On 3/17/11 9:07 PM, Manjunath Siddaiah wrote:
>> Hi,
>>
>>
>> If "CONFIG_AVFILTER" is enabled, scaling goes through filtering path.
>> Since "AVFilterContext" variables are in
>>
>> "AVInputStream" structure we can't have different context for different
>> outputs in case of multiple outputs. Because of this there will be video
>> distortions or segmentation fault if we do multiple outputs with different resolutions.
>>
>>
>>
>> The solution is to move the "AVFilterContext" fields from "AVInputStream" to the
>> "AVOutputStream" structure, so that each output stream gets its own filter context
>> when there are multiple outputs.
>>
>> Tested the patch by taking a single input stream and producing outputs with
>> different resolutions, as follows:
>>
>> ./ffmpeg -i ../MP4_009-720x480_100_128_44.1_30.mp4 -s 176x144 ../out1.mp4 -s 320x240
>> ../out2.mp4 -s 352x288 ../out3.mp4 -s 640x480 ../out4.mp4
>
> Seems correct, I'd like to have Stefano's opinion on it.

I've seen something similar to this floated before.  Don't know what
became of it.
Reinhard Tartler April 21, 2011, 10:46 a.m. | #3
On Thu, Mar 17, 2011 at 22:59:20 (CET), Luca Barbato wrote:

> On 3/17/11 9:07 PM, Manjunath Siddaiah wrote:
>> Hi,
>>
>>
>> If "CONFIG_AVFILTER" is enabled, scaling goes through filtering path.
>> Since "AVFilterContext" variables are in
>>

[...]

>> The solution is to move the "AVFilterContext" fields from "AVInputStream" to the
>> "AVOutputStream" structure, so that each output stream gets its own filter context
>> when there are multiple outputs.
>>

[...]

> Seems correct, I'd like to have Stefano's opinion on it.

I pinged stefano on irc about this:

12:38 <siretart> saste: can you comment on http://patches.libav.org/patch/1705/ ?                                           
12:40 <saste> siretart: a variant to that was applied to FFmpeg                                                             
12:41 <saste> i believe it has been posted already to the libav-devel ML, by Anton or someone else                          
12:41 <siretart> saste: I failed to locate it in the git log. do you remember what was in the subject line?                 
12:42 <saste> siretart: [libav-devel] [PATCH 27/35] ffmpeg.c: restructure video filter implementation.                      

AKA http://patches.libav.org/patch/2451/

Patch

Index: ffmpeg.c
===================================================================
--- ffmpeg.c	(revision 26402)
+++ ffmpeg.c	(working copy)
@@ -303,6 +303,15 @@ 
     AVAudioConvert *reformat_ctx;
     AVFifoBuffer *fifo;     /* for compression: one audio fifo per codec */
     FILE *logfile;
+    /* Filter contexts are placed on the output stream rather than on the input stream, */
+    /* so that each output stream gets its own filtering context when there are multiple outputs. */
+#if CONFIG_AVFILTER 
+    AVFilterContext *output_video_filter;
+    AVFilterContext *input_video_filter;
+    AVFrame *filter_frame;
+    int has_filter_frame;
+    AVFilterBufferRef *picref;
+#endif
 } AVOutputStream;
 
 static AVOutputStream **output_streams_for_file[MAX_FILES] = { NULL };
@@ -324,13 +333,6 @@ 
     int is_start;            /* is 1 at the start and after a discontinuity */
     int showed_multi_packet_warning;
     int is_past_recording_time;
-#if CONFIG_AVFILTER
-    AVFilterContext *output_video_filter;
-    AVFilterContext *input_video_filter;
-    AVFrame *filter_frame;
-    int has_filter_frame;
-    AVFilterBufferRef *picref;
-#endif
 } AVInputStream;
 
 typedef struct AVInputFile {
@@ -347,7 +349,7 @@ 
 #endif
 
 #if CONFIG_AVFILTER
-
+/* filter contexts now live on AVOutputStream instead of AVInputStream */
 static int configure_filters(AVInputStream *ist, AVOutputStream *ost)
 {
     AVFilterContext *last_filter, *filter;
@@ -362,15 +364,15 @@ 
 
     snprintf(args, 255, "%d:%d:%d:%d:%d", ist->st->codec->width,
              ist->st->codec->height, ist->st->codec->pix_fmt, 1, AV_TIME_BASE);
-    ret = avfilter_graph_create_filter(&ist->input_video_filter, avfilter_get_by_name("buffer"),
+    ret = avfilter_graph_create_filter(&ost->input_video_filter, avfilter_get_by_name("buffer"),
                                        "src", args, NULL, graph);
     if (ret < 0)
         return ret;
-    ret = avfilter_graph_create_filter(&ist->output_video_filter, &ffsink,
+    ret = avfilter_graph_create_filter(&ost->output_video_filter, &ffsink,
                                        "out", NULL, &ffsink_ctx, graph);
     if (ret < 0)
         return ret;
-    last_filter = ist->input_video_filter;
+    last_filter = ost->input_video_filter;
 
     if (codec->width  != icodec->width || codec->height != icodec->height) {
         snprintf(args, 255, "%d:%d:flags=0x%X",
@@ -398,7 +400,7 @@ 
         outputs->next    = NULL;
 
         inputs->name    = av_strdup("out");
-        inputs->filter_ctx = ist->output_video_filter;
+        inputs->filter_ctx = ost->output_video_filter;
         inputs->pad_idx = 0;
         inputs->next    = NULL;
 
@@ -406,15 +408,15 @@ 
             return ret;
         av_freep(&vfilters);
     } else {
-        if ((ret = avfilter_link(last_filter, 0, ist->output_video_filter, 0)) < 0)
+        if ((ret = avfilter_link(last_filter, 0, ost->output_video_filter, 0)) < 0)
             return ret;
     }
 
     if ((ret = avfilter_graph_config(graph, NULL)) < 0)
         return ret;
 
-    codec->width  = ist->output_video_filter->inputs[0]->w;
-    codec->height = ist->output_video_filter->inputs[0]->h;
+    codec->width  = ost->output_video_filter->inputs[0]->w;
+    codec->height = ost->output_video_filter->inputs[0]->h;
 
     return 0;
 }
@@ -1470,7 +1472,7 @@ 
     AVSubtitle subtitle, *subtitle_to_free;
     int64_t pkt_pts = AV_NOPTS_VALUE;
 #if CONFIG_AVFILTER
-    int frame_available;
+    AVRational ist_pts_tb;
 #endif
 
     AVPacket avpkt;
@@ -1606,12 +1608,16 @@ 
         }
 
 #if CONFIG_AVFILTER
-        if (ist->st->codec->codec_type == AVMEDIA_TYPE_VIDEO && ist->input_video_filter) {
-            // add it to be filtered
-            av_vsrc_buffer_add_frame(ist->input_video_filter, &picture,
+        // add it to be filtered on every output video stream
+        if (ist->st->codec->codec_type == AVMEDIA_TYPE_VIDEO){
+            for(i=0;i<nb_ostreams;i++) {
+                ost = ost_table[i];
+                if(ost->st->codec->codec_type == AVMEDIA_TYPE_VIDEO && ost->input_video_filter) 
+                    av_vsrc_buffer_add_frame(ost->input_video_filter, &picture,
                                      ist->pts,
                                      ist->st->codec->sample_aspect_ratio);
         }
+        }
 #endif
 
         // preprocess audio (volume)
@@ -1635,21 +1641,10 @@ 
             if (pts > now)
                 usleep(pts - now);
         }
-#if CONFIG_AVFILTER
-        frame_available = ist->st->codec->codec_type != AVMEDIA_TYPE_VIDEO ||
-            !ist->output_video_filter || avfilter_poll_frame(ist->output_video_filter->inputs[0]);
-#endif
         /* if output time reached then transcode raw format,
            encode packets and output them */
         if (start_time == 0 || ist->pts >= start_time)
-#if CONFIG_AVFILTER
-        while (frame_available) {
-            AVRational ist_pts_tb;
-            if (ist->st->codec->codec_type == AVMEDIA_TYPE_VIDEO && ist->output_video_filter)
-                get_filtered_video_frame(ist->output_video_filter, &picture, &ist->picref, &ist_pts_tb);
-            if (ist->picref)
-                ist->pts = av_rescale_q(ist->picref->pts, ist_pts_tb, AV_TIME_BASE_Q);
-#endif
+
             for(i=0;i<nb_ostreams;i++) {
                 int frame_size;
 
@@ -1668,10 +1663,18 @@ 
                             break;
                         case AVMEDIA_TYPE_VIDEO:
 #if CONFIG_AVFILTER
-                            if (ist->picref->video)
-                                ost->st->codec->sample_aspect_ratio = ist->picref->video->pixel_aspect;
+                            if (ist->st->codec->codec_type == AVMEDIA_TYPE_VIDEO && ost->output_video_filter)
+                                get_filtered_video_frame(ost->output_video_filter, &picture, &ost->picref, &ist_pts_tb);
+                            if (ost->picref)
+                                ist->pts = av_rescale_q(ost->picref->pts, ist_pts_tb, AV_TIME_BASE_Q);
+                            if (ost->picref->video)
+                                ost->st->codec->sample_aspect_ratio = ost->picref->video->pixel_aspect;
 #endif
                             do_video_out(os, ost, ist, &picture, &frame_size);
+#if CONFIG_AVFILTER
+                            if(ost->picref)
+                                avfilter_unref_buffer(ost->picref);
+#endif
                             if (vstats_filename && frame_size)
                                 do_video_stats(os, ost, frame_size);
                             break;
@@ -1741,13 +1744,6 @@ 
                 }
             }
 
-#if CONFIG_AVFILTER
-            frame_available = (ist->st->codec->codec_type == AVMEDIA_TYPE_VIDEO) &&
-                              ist->output_video_filter && avfilter_poll_frame(ist->output_video_filter->inputs[0]);
-            if(ist->picref)
-                avfilter_unref_buffer(ist->picref);
-        }
-#endif
         av_free(buffer_to_free);
         /* XXX: allocate the subtitles in the codec ? */
         if (subtitle_to_free) {