
Media (91)

Other articles (22)

  • Publishing to MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, as long as your MediaSPIP installation is at version 0.2 or higher. If in doubt, ask your MediaSPIP administrator.

  • Possible deployments

    31 January 2010

    Two types of deployment are possible, depending on two factors: the installation method chosen (standalone or as a farm), and the expected number of daily encodes and level of traffic.
    Encoding video is a heavy process that consumes a great deal of system resources (CPU and RAM), so all of this must be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

  • Selection of projects using MediaSPIP

    2 May 2011

    The examples below are representative of how MediaSPIP is used in specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hosting activities, an internet access point, training, and innovative projects in the field of information and communication technologies, as well as website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, as one of only a half-dozen such associations. Its members (...)

On other sites (3949)

  • FFmpeg image to video conversion error: [image2] Opening file for reading

    14 February 2018, by user1690179

    I am running ffmpeg on an AWS Lambda instance. The Lambda function takes an input image and transcodes it into a video segment using ffmpeg:

    ffmpeg -loop 1 -i /tmp/photo-SNRUR7ZS13.jpg -c:v libx264 -t 7.00 -pix_fmt yuv420p -vf scale=1280x720 /tmp/output.mp4

    I am seeing inconsistent behavior where the output video is sometimes shorter than the specified duration. It happens unpredictably, to seemingly random images: the exact same image sometimes renders correctly and is sometimes cut short.
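    Since the input loops at 25 fps, the requested duration maps directly to a frame count, which can be cross-checked against the "frames encoded" line in ffmpeg's summary. A minimal sketch of that arithmetic (the numbers are taken from the logs in this question):

    ```python
    def expected_frames(duration_s: float, fps: int) -> int:
        # Frames ffmpeg should emit for `-t duration_s` at the given frame rate.
        return round(duration_s * fps)

    # -t 7.00 at 25 fps should yield 175 frames; the normal log reports
    # "175 frames encoded", so a smaller count there means a cut video.
    print(expected_frames(7.00, 25))   # 175
    print(expected_frames(15.00, 25))  # 375
    ```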

    This behavior only happens on Lambda. I am not able to replicate it on my local computer, or on a dedicated EC2 instance running the same environment as Lambda.

    I noticed that when the output video is short, the ffmpeg log is different. The main difference is the repeated [image2 @ 0x4b11140] Opening '/tmp/photo-2HD2Z3UN3W.jpg' for reading lines. See the ffmpeg logs below.

    Normal execution with the correct output video length:

       ffmpeg -loop 1 -i /tmp/photo-SNRUR7ZS13.jpg -c:v libx264 -t 7.00 -pix_fmt yuv420p -vf scale=1280x720 /tmp/video-TMB6RNO0EE.mp4
    ffmpeg version N-89773-g7fcbebbeaf-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 6.4.0 (Debian 6.4.0-11) 20171206
     configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg
     libavutil      56.  7.100 / 56.  7.100
     libavcodec     58.  9.100 / 58.  9.100
     libavformat    58.  3.100 / 58.  3.100
     libavdevice    58.  0.100 / 58.  0.100
     libavfilter     7. 11.101 /  7. 11.101
     libswscale      5.  0.101 /  5.  0.101
     libswresample   3.  0.101 /  3.  0.101
     libpostproc    55.  0.100 / 55.  0.100
    Input #0, image2, from '/tmp/photo-SNRUR7ZS13.jpg':
     Duration: 00:00:00.04, start: 0.000000, bitrate: 18703 kb/s
       Stream #0:0: Video: mjpeg, yuvj444p(pc, bt470bg/unknown/unknown), 687x860 [SAR 200:200 DAR 687:860], 25 fps, 25 tbr, 25 tbn, 25 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    [swscaler @ 0x5837900] deprecated pixel format used, make sure you did set range correctly
    [libx264 @ 0x51c2340] using SAR=1477/3287
    [libx264 @ 0x51c2340] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 0x51c2340] profile High, level 3.1
    [libx264 @ 0x51c2340] 264 - core 155 r61 b00bcaf - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to '/tmp/video-TMB6RNO0EE.mp4':
     Metadata:
       encoder         : Lavf58.3.100
       Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 6183:13760 DAR 687:860], q=-1--1, 25 fps, 12800 tbn, 25 tbc
       Metadata:
         encoder         : Lavc58.9.100 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    frame=   49 fps=0.0 q=28.0 size=       0kB time=-00:00:00.03 bitrate=N/A speed=N/A    
    frame=   69 fps= 66 q=28.0 size=       0kB time=00:00:00.76 bitrate=   0.5kbits/s speed=0.728x    
    frame=   89 fps= 57 q=28.0 size=       0kB time=00:00:01.56 bitrate=   0.2kbits/s speed=0.998x    
    frame=  109 fps= 53 q=28.0 size=       0kB time=00:00:02.36 bitrate=   0.2kbits/s speed=1.14x    
    frame=  129 fps= 50 q=28.0 size=       0kB time=00:00:03.16 bitrate=   0.1kbits/s speed=1.22x    
    frame=  148 fps= 48 q=28.0 size=       0kB time=00:00:03.92 bitrate=   0.1kbits/s speed=1.27x    
    frame=  168 fps= 47 q=28.0 size=       0kB time=00:00:04.72 bitrate=   0.1kbits/s speed=1.31x    
    No more output streams to write to, finishing.
    frame=  175 fps= 39 q=-1.0 Lsize=      94kB time=00:00:06.88 bitrate= 112.2kbits/s speed=1.54x    
    video:91kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.161261%
    Input file #0 (/tmp/photo-SNRUR7ZS13.jpg):
     Input stream #0:0 (video): 176 packets read (16459168 bytes); 176 frames decoded;
     Total: 176 packets (16459168 bytes) demuxed
    Output file #0 (/tmp/video-TMB6RNO0EE.mp4):
     Output stream #0:0 (video): 175 frames encoded; 175 packets muxed (93507 bytes);
     Total: 175 packets (93507 bytes) muxed
    [libx264 @ 0x51c2340] frame I:1     Avg QP:14.33  size: 73084
    [libx264 @ 0x51c2340] frame P:44    Avg QP:14.09  size:   302
    [libx264 @ 0x51c2340] frame B:130   Avg QP:23.31  size:    50
    [libx264 @ 0x51c2340] consecutive B-frames:  0.6%  1.1%  0.0% 98.3%
    [libx264 @ 0x51c2340] mb I  I16..4:  3.3% 84.5% 12.1%
    [libx264 @ 0x51c2340] mb P  I16..4:  0.0%  0.0%  0.0%  P16..4:  3.2%  0.1%  0.0%  0.0%  0.0%    skip:96.7%
    [libx264 @ 0x51c2340] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8:  0.4%  0.0%  0.0%  direct: 0.0%  skip:99.6%  L0:31.2% L1:68.8% BI: 0.0%
    [libx264 @ 0x51c2340] 8x8 transform intra:84.5% inter:98.8%
    [libx264 @ 0x51c2340] coded y,uvDC,uvAC intra: 95.1% 63.9% 51.6% inter: 0.1% 0.6% 0.0%
    [libx264 @ 0x51c2340] i16 v,h,dc,p: 26% 21%  4% 49%
    [libx264 @ 0x51c2340] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 27% 21%  3%  5%  6%  6%  4%  9%
    [libx264 @ 0x51c2340] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 36% 10%  4%  7%  5%  6%  2%  6%
    [libx264 @ 0x51c2340] i8c dc,h,v,p: 51% 29% 16%  4%
    [libx264 @ 0x51c2340] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x51c2340] ref P L0: 96.5%  0.0%  3.3%  0.2%
    [libx264 @ 0x51c2340] ref B L0: 42.4% 57.6%
    [libx264 @ 0x51c2340] ref B L1: 97.0%  3.0%
    [libx264 @ 0x51c2340] kb/s:106.08

    Log from a run that produced a short video:

    ffmpeg -framerate 25 -y -loop 1 -i /tmp/photo-2HD2Z3UN3W.jpg -t 15.00 -filter_complex "[0:v]crop=h=ih:w='if(gt(a,16/9),ih*16/9,iw)':y=0:x='if(gt(a,16/9),(ow-iw)/2,0)'[tmp];[tmp]scale=-1:4000,crop=w=iw:h='min(iw*9/16,ih)':x=0:y='0.17*ih-((t/15.00)*min(0.17*ih,(ih-oh)/6))',trim=duration=15.00[tmp1];[tmp1]zoompan=z='if(lte(pzoom,1.0),1.15,max(1.0,pzoom-0.0005))':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=1,setsar=sar=1:1[animated];[animated]fade=out:st=12.00:d=3.00:c=#000000[animated]" -map "[animated]" -pix_fmt yuv420p -s 1280x720 -y /tmp/video-QB1JCDT021.mp4
    ffmpeg version N-89773-g7fcbebbeaf-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2018 the FFmpeg developers
    built with gcc 6.4.0 (Debian 6.4.0-11) 20171206
    configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg
    libavutil      56.  7.100 / 56.  7.100
    libavcodec     58.  9.100 / 58.  9.100
    libavformat    58.  3.100 / 58.  3.100
    libavdevice    58.  0.100 / 58.  0.100
    libavfilter     7. 11.101 /  7. 11.101
    libswscale      5.  0.101 /  5.  0.101
    libswresample   3.  0.101 /  3.  0.101
    libpostproc    55.  0.100 / 55.  0.100
    Input #0, image2, from '/tmp/photo-2HD2Z3UN3W.jpg':
    Duration: 00:00:00.04, start: 0.000000, bitrate: 373617 kb/s
    Stream #0:0: Video: mjpeg, yuvj444p(pc, bt470bg/unknown/unknown), 1936x2592 [SAR 72:72 DAR 121:162], 25 fps, 25 tbr, 25 tbn, 25 tbc
    Stream mapping:
    Stream #0:0 (mjpeg) -> crop
    fade -> Stream #0:0 (libx264)
    Press [q] to stop, [?] for help
    [swscaler @ 0x4d63b40] deprecated pixel format used, make sure you did set range correctly
    [swscaler @ 0x4df7340] deprecated pixel format used, make sure you did set range correctly
    [swscaler @ 0x50e97c0] deprecated pixel format used, make sure you did set range correctly
    [swscaler @ 0x50e97c0] Warning: data is not aligned! This can lead to a speed loss
    [libx264 @ 0x4b17480] using SAR=1/1
    [libx264 @ 0x4b17480] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 0x4b17480] profile High, level 3.1
    [libx264 @ 0x4b17480] 264 - core 155 r61 b00bcaf - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to '/tmp/video-QB1JCDT021.mp4':
    Metadata:
    encoder : Lavf58.3.100
    Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 25 fps, 12800 tbn, 25 tbc
    Metadata:
    encoder : Lavc58.9.100 libx264
    Side data:
    cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    [swscaler @ 0x5bd0380] deprecated pixel format used, make sure you did set range correctly
    debug=1
    cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    [image2 @ 0x4b11140] Opening '/tmp/photo-2HD2Z3UN3W.jpg' for reading
    [AVIOContext @ 0x4b6ecc0] Statistics: 1868086 bytes read, 0 seeks
    [mjpeg @ 0x4b14940] marker=d8 avail_size_in_buf=1868084
    [mjpeg @ 0x4b14940] marker parser used 0 bytes (0 bits)
    [mjpeg @ 0x4b14940] marker=e0 avail_size_in_buf=1868082
    [mjpeg @ 0x4b14940] marker parser used 16 bytes (128 bits)
    [mjpeg @ 0x4b14940] marker=db avail_size_in_buf=1868064
    [mjpeg @ 0x4b14940] index=0
    [mjpeg @ 0x4b14940] qscale[0]: 0
    [mjpeg @ 0x4b14940] marker parser used 67 bytes (536 bits)
    [mjpeg @ 0x4b14940] marker=db avail_size_in_buf=1867995
    [mjpeg @ 0x4b14940] index=1
    [mjpeg @ 0x4b14940] qscale[1]: 1
    [mjpeg @ 0x4b14940] marker parser used 67 bytes (536 bits)
    [mjpeg @ 0x4b14940] marker=c0 avail_size_in_buf=1867926
    [mjpeg @ 0x4b14940] sof0: picture: 1936x2592
    [mjpeg @ 0x4b14940] component 0 1:1 id: 0 quant:0
    [mjpeg @ 0x4b14940] component 1 1:1 id: 1 quant:1
    [mjpeg @ 0x4b14940] component 2 1:1 id: 2 quant:1
    [mjpeg @ 0x4b14940] pix fmt id 11111100
    [mjpeg @ 0x4b14940] marker parser used 17 bytes (136 bits)
    [mjpeg @ 0x4b14940] marker=c4 avail_size_in_buf=1867907
    [mjpeg @ 0x4b14940] class=0 index=0 nb_codes=11
    [mjpeg @ 0x4b14940] marker parser used 30 bytes (240 bits)
    [mjpeg @ 0x4b14940] marker=c4 avail_size_in_buf=1867875
    [mjpeg @ 0x4b14940] class=1 index=0 nb_codes=242
    [mjpeg @ 0x4b14940] marker parser used 82 bytes (656 bits)
    [mjpeg @ 0x4b14940] marker=c4 avail_size_in_buf=1867791
    [mjpeg @ 0x4b14940] class=0 index=1 nb_codes=8
    [mjpeg @ 0x4b14940] marker parser used 27 bytes (216 bits)
    [mjpeg @ 0x4b14940] marker=c4 avail_size_in_buf=1867762
    [mjpeg @ 0x4b14940] class=1 index=1 nb_codes=241
    [mjpeg @ 0x4b14940] marker parser used 51 bytes (408 bits)
    [mjpeg @ 0x4b14940] escaping removed 7149 bytes
    [mjpeg @ 0x4b14940] marker=da avail_size_in_buf=1867709
    [mjpeg @ 0x4b14940] component: 0
    [mjpeg @ 0x4b14940] component: 1
    [mjpeg @ 0x4b14940] component: 2
    [mjpeg @ 0x4b14940] marker parser used 1860559 bytes (14884468 bits)
    [mjpeg @ 0x4b14940] marker=d9 avail_size_in_buf=0
    [mjpeg @ 0x4b14940] decode frame unused 0 bytes
    [swscaler @ 0x5bd42c0] deprecated pixel format used, make sure you did set range correctly
    cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    [image2 @ 0x4b11140] Opening '/tmp/photo-2HD2Z3UN3W.jpg' for reading
    [AVIOContext @ 0x4b6ecc0] Statistics: 1868086 bytes read, 0 seeks
    [mjpeg @ 0x4b14940] marker=d8 avail_size_in_buf=1868084
    [mjpeg @ 0x4b14940] marker parser used 0 bytes (0 bits)
    [mjpeg @ 0x4b14940] marker=e0 avail_size_in_buf=1868082
    [mjpeg @ 0x4b14940] marker parser used 16 bytes (128 bits)
    [mjpeg @ 0x4b14940] marker=db avail_size_in_buf=1868064
    [mjpeg @ 0x4b14940] index=0
    [mjpeg @ 0x4b14940] qscale[0]: 0
    [mjpeg @ 0x4b14940] marker parser used 67 bytes (536 bits)
    [mjpeg @ 0x4b14940] marker=db avail_size_in_buf=1867995
    [mjpeg @ 0x4b14940] index=1
    [mjpeg @ 0x4b14940] qscale[1]: 1
    [mjpeg @ 0x4b14940] marker parser used 67 bytes (536 bits)
    [mjpeg @ 0x4b14940] marker=c0 avail_size_in_buf=1867926
    [mjpeg @ 0x4b14940] sof0: picture: 1936x2592
    [mjpeg @ 0x4b14940] component 0 1:1 id: 0 quant:0
    [mjpeg @ 0x4b14940] component 1 1:1 id: 1 quant:1
    [mjpeg @ 0x4b14940] component 2 1:1 id: 2 quant:1
    [mjpeg @ 0x4b14940] pix fmt id 11111100
    [mjpeg @ 0x4b14940] marker parser used 17 bytes (136 bits)
    [mjpeg @ 0x4b14940] marker=c4 avail_size_in_buf=1867907
    [mjpeg @ 0x4b14940] class=0 index=0 nb_codes=11
    [mjpeg @ 0x4b14940] marker parser used 30 bytes (240 bits)
    [mjpeg @ 0x4b14940] marker=c4 avail_size_in_buf=1867875
    [mjpeg @ 0x4b14940] class=1 index=0 nb_codes=242
    [mjpeg @ 0x4b14940] marker parser used 82 bytes (656 bits)
    [mjpeg @ 0x4b14940] marker=c4 avail_size_in_buf=1867791
    [mjpeg @ 0x4b14940] class=0 index=1 nb_codes=8
    [mjpeg @ 0x4b14940] marker parser used 27 bytes (216 bits)
    [mjpeg @ 0x4b14940] marker=c4 avail_size_in_buf=1867762
    [mjpeg @ 0x4b14940] class=1 index=1 nb_codes=241
    [mjpeg @ 0x4b14940] marker parser used 51 bytes (408 bits)
    [mjpeg @ 0x4b14940] escaping removed 7149 bytes
    [mjpeg @ 0x4b14940] marker=da avail_size_in_buf=1867709
    [mjpeg @ 0x4b14940] component: 0
    [mjpeg @ 0x4b14940] component: 1
    [mjpeg @ 0x4b14940] component: 2
    [mjpeg @ 0x4b14940] marker parser used 1860559 bytes (14884468 bits)
    [mjpeg @ 0x4b14940] marker=d9 avail_size_in_buf=0
    [mjpeg @ 0x4b14940] decode frame unused 0 bytes
    [swscaler @ 0x5bd8200] deprecated pixel format used, make sure you did set range correctly
    cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    ...
    ...
    ...

    As requested, here is a link to the full log. In that log, ffmpeg renders only 323 out of 375 frames.

    The Opening '/tmp/photo-2HD2Z3UN3W.jpg' segment repeats many, many times until ffmpeg finally renders out a short video. Does anyone have insight into why it keeps re-opening the image file? This must have something to do with the underlying issue.
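    One pragmatic mitigation (not a root-cause fix) is to probe each rendered file and retry the render when it comes up short. A minimal sketch, assuming ffprobe is available on the Lambda runtime's PATH:

    ```python
    import subprocess

    def output_duration(path: str) -> float:
        # Ask ffprobe for the container duration in seconds.
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-show_entries", "format=duration",
             "-of", "default=noprint_wrappers=1:nokey=1", path],
            capture_output=True, text=True, check=True)
        return float(out.stdout.strip())

    def is_too_short(actual_s: float, expected_s: float, tol_s: float = 0.1) -> bool:
        # Flag outputs that fall more than tol_s short of the requested -t value.
        return actual_s < expected_s - tol_s

    # The truncated run above encodes 323 of 375 frames at 25 fps (~12.92 s vs 15 s):
    print(is_too_short(323 / 25, 15.00))  # True
    ```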

  • How to create a video from multiple gallery images stored in an array (Flutter ffmpeg)

    26 January 2023, by Ammara

    I select images from the gallery using multi_image_picker in Flutter and store all of the images in an array.

      try {
        resultList = await MultiImagePicker.pickImages(
          maxImages: 300,
          enableCamera: true,
          selectedAssets: images,
          materialOptions: MaterialOptions(
            actionBarTitle: "Photo Editor and Video Maker App",
          ),
        );
      }

    The user can select images from the gallery, which are stored in the resultList array. Now I want to pass this array to ffmpeg to create a video from all of these images.

    


    I have tried a lot and searched many sites, without success. Here is my code:

    Future<void> ConvertImageToVideo() async {
      const String BASE_PATH = '/storage/emulated/0/Download/';
      const String AUDIO_PATH = BASE_PATH + 'audiio.mp3';
      const String IMAGE_PATH = BASE_PATH + 'image002.png';
      const String OUTPUT_PATH = BASE_PATH + 'output02.mp4';
      // final FlutterFFmpeg _flutterFFmpeg = FlutterFFmpeg();
      if (await Permission.storage.request().isGranted) {
        List<Asset> resultlist = <Asset>[];
        String commandToExecute =
            '-r 15 -f mp3 -i ${AUDIO_PATH} -f image2 -i ${resultlist} -y ${OUTPUT_PATH}';
        await FFmpegKit.execute(commandToExecute).then((session) async {
          final returnCode = await session.getReturnCode();
          final state = await session.getState();
          if (ReturnCode.isSuccess(returnCode)) {
            print("ruuning   " + state.toString());
            print("video created " + returnCode.toString());
          } else if (ReturnCode.isCancel(returnCode)) {
            print("video cancel " + returnCode.toString());
          } else {
            print("error ");
          }
        });
      } else if (await Permission.storage.isPermanentlyDenied) {
        openAppSettings();
      }
    }
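    One reason the command above fails is that -i ${resultlist} interpolates the Dart list's toString(), which ffmpeg cannot read as an input; the image2 demuxer expects a printf-style filename pattern, so the selected images first need to be copied to sequentially numbered files. The command-building step can be sketched as follows (shown in Python for brevity; the Dart version has the same shape, and the img%03d.png naming is an assumption for illustration):

    ```python
    def build_ffmpeg_command(image_dir: str, audio_path: str, output_path: str,
                             fps: int = 15) -> str:
        # image2 reads a pattern, not a list of paths, so the gallery images
        # must already be saved as img001.png, img002.png, ... in image_dir.
        pattern = f"{image_dir}/img%03d.png"
        return (f"-framerate {fps} -i {pattern} -i {audio_path} "
                f"-c:v libx264 -pix_fmt yuv420p -shortest -y {output_path}")

    print(build_ffmpeg_command("/storage/emulated/0/Download",
                               "/storage/emulated/0/Download/audiio.mp3",
                               "/storage/emulated/0/Download/output02.mp4"))
    ```

    -shortest stops the encode when the shorter of the two inputs (images or audio) ends, so the video is not padded to the full audio length.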

  • The neutering of Google Code-In 2011

    23 October 2011, by Dark Shikari — development, GCI, google, x264

    Posting this from the Google Summer of Code Mentor Summit, at a session about Google Code-In!

    Google Code-In is the most innovative open-source program I’ve ever seen. It provided a way for students who had never done open source — or never even done programming — to get involved in open source work. It made it easy for people who weren’t sure of their ability, who didn’t know whether they could do open source, to get involved and realize that yes, they too could do amazing work — whether code useful to millions of people, documentation to make the code useful, translations to make it accessible, and more. Hundreds of students had a great experience, learned new things, and many stayed around in open source projects afterwards because they enjoyed it so much!

    x264 benefitted greatly from Google Code-In. Most of the high bit depth assembly code was written through GCI — literally man-weeks of work by a professional developer, done by high-schoolers who had never written assembly before! Furthermore, we got loads of bugs fixed in ffmpeg/libav, a regression test tool, and more. And best of all, we gained a new developer: Daniel Kang, who is now a student at MIT, an x264 and libav developer, and has gotten paid work applying the skills he learned in Google Code-In!

    Some students in GCI complained about the system being “unfair”. Task difficulties were inconsistent and there were many ways to game the system to get lots of points. Some people complained about Daniel — he was completing a staggering number of tasks, so they must be too easy. Yet many of the other students considered these tasks too hard. I mean, I’m asking high school students to write hundreds of lines of complicated assembly code in one of the world’s most complicated instruction sets, and optimize it to meet extremely strict code-review standards! Of course, there may have been valid complaints about other projects: I did hear from many students talking about gaming the system and finding the easiest, most “profitable” tasks. Though, with the payout capped at $500, the only prize for gaming the system is a high rank on the points list.

    According to people at the session, in an effort to make GCI more “fair”, Google has decided to change the system. There are two big changes they’re making.

    Firstly, Google is requiring projects to submit tasks on only two dates: the start, and the halfway point. But in Google Code-In, we certainly had no idea at the start what types of tasks would be the most popular — or new ideas that came up over time. Often students would come up with ideas for tasks, which we could then add! A waterfall-style plan-everything-in-advance model does not work for real-world coding. The halfway point addition may solve this somewhat, but this is still going to dramatically reduce the number of ideas that can be proposed as tasks.

    Secondly, Google is requiring projects to submit at least 5 tasks in each category just to apply: quality assurance, translation, documentation, coding, outreach, training, user interface, and research. For large projects like Gnome, this is easy: they can certainly come up with 5 for each on such a large, general project. But for a small, focused project, some of these are often completely irrelevant. This rules out a huge number of smaller projects that just don’t have relevant work in all these categories. x264 may be saved here: as we work under the Videolan umbrella, we’ll likely be able to fudge enough tasks from Videolan to cover the gaps. But hundreds of other organizations are going to be out of luck. It would make more sense to require, say, 5 out of the 8 categories, to allow some flexibility while still encouraging interesting non-coding tasks.

    For example, what’s “user interface” for a software library with a stable API, say, a libc? Can you make 5 tasks out of it that are actually useful?

    If x264 applied on its own, could you come up with 5 real, meaningful tasks in each category for it? It might be possible, but it’d require a lot of stretching.

    How many smaller or more-focused projects do you think are going to give up and not apply because of this?

    Is GCI supposed to be something for everyone, or just for Gnome, KDE, and other megaprojects?