
Other articles (41)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out. -
Contribute to a better visual interface
13 April 2011. MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community. -
What is an editorial?
21 June 2013. Write your point of view in an article. It will be filed in a section set aside for this purpose.
An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. A single editorial is featured on the home page; to read the previous ones, see the dedicated section.
You can customise the editorial creation form.
Editorial creation form. In the case of an editorial-type document, the (...)
On other sites (8639)
-
Error of "Built target opencv_imgproc" while compiling opencv2
23 March 2017, by Hong
Following https://github.com/menpo/conda-opencv3, when I compile OpenCV I get the error below (please see the end of the post). The only change I made was to enable FFmpeg by setting "-DWITH_FFMPEG=1" in BUILD.SH. Any suggestions?
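For context, the change described above presumably amounts to adding the FFmpeg switch to the CMake call inside conda/build.sh, roughly as follows (a hypothetical sketch; the actual script lives in the menpo/conda-opencv3 repository and is not reproduced in the post):

# Hypothetical excerpt of conda/build.sh; only -DWITH_FFMPEG=1 was reportedly changed.
# CMAKE_GENERATOR and OPENMP correspond to the variables visible in the build trace below;
# CPU_COUNT is an environment variable provided by conda-build (default to 1 if unset).
mkdir build
cd build
cmake .. -G "$CMAKE_GENERATOR" \
    -DCMAKE_BUILD_TYPE=Release \
    -DWITH_FFMPEG=1 \
    $OPENMP
make -j${CPU_COUNT:-1}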
$conda build conda/
BUILD START: opencv3-3.1.0-py27_0
updating index in: /home/cocadas/anaconda2/conda-bld/linux-64
updating index in: /home/cocadas/anaconda2/conda-bld/noarch
The following NEW packages will be INSTALLED:
bzip2: 1.0.6-3
cmake: 3.6.3-0
curl: 7.52.1-0
eigen: 3.2.7-0 menpo
expat: 2.1.0-0
mkl: 2017.0.1-0
ncurses: 5.9-10
numpy: 1.12.1-py27_0
openssl: 1.0.2k-1
pip: 9.0.1-py27_1
python: 2.7.13-0
readline: 6.2-2
setuptools: 27.2.0-py27_0
sqlite: 3.13.0-0
tk: 8.5.18-0
wheel: 0.29.0-py27_0
xz: 5.2.2-1
zlib: 1.2.8-3
Source cache directory is: /home/cocadas/anaconda2/conda-bld/src_cache
Found source in cache: opencv-3.1.0.tar.gz
Extracting download
Applying patch: u'/home/cocadas/conda-opencv3/conda/no_rpath.patch'
patching file CMakeLists.txt
patch unexpectedly ends in middle of line
Hunk #1 succeeded at 397 with fuzz 1 (offset 11 lines).
Package: opencv3-3.1.0-py27_0
source tree in: /home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0
source /home/cocadas/anaconda2/bin/activate /home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/_b_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pl
mkdir build
cd build
CMAKE_GENERATOR='Unix Makefiles'
CMAKE_ARCH=-m64
++ uname -s
SHORT_OS_STR=Linux
'[' Linux == Linux ']'
DYNAMIC_EXT=so
TBB=
OPENMP=-DWITH_OPENMP=1
IS_OSX=0
-- 3rdparty dependencies: zlib libjpeg libwebp libpng libtiff libjasper IlmImf
--
-- OpenCV modules:
-- To be built: core flann hdf imgproc ml photo reg surface_matching video dnn fuzzy imgcodecs shape videoio highgui objdetect plot superres xobjdetect xphoto bgsegm bioinspired dpm face features2d line_descriptor saliency text calib3d ccalib datasets rgbd stereo structured_light tracking videostab xfeatures2d ximgproc aruco optflow sfm stitching python2
-- Disabled: world contrib_world
-- Disabled by dependency: -
-- Unavailable: cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev java python3 ts viz cvv matlab
--
-- GUI:
-- QT: NO
-- GTK+ 3.x: YES (ver 3.18.9)
-- GThread : YES (ver 2.48.2)
-- GtkGlExt: NO
-- OpenGL support: NO
-- VTK support: NO
--
-- Media I/O:
-- ZLib: build (ver 1.2.8)
-- JPEG: build (ver 90)
-- WEBP: build (ver 0.3.1)
-- PNG: build (ver 1.6.19)
-- TIFF: build (ver 42 - 4.0.2)
-- JPEG 2000: build (ver 1.900.1)
-- OpenEXR: build (ver 1.7.1)
-- GDAL: NO
--
-- Video I/O:
-- DC1394 1.x: NO
-- DC1394 2.x: YES (ver 2.2.4)
-- FFMPEG: YES
-- codec: YES (ver 56.60.100)
-- format: YES (ver 56.40.101)
-- util: YES (ver 54.31.100)
-- swscale: YES (ver 3.1.101)
-- resample: NO
-- gentoo-style: YES
-- GStreamer: NO
-- OpenNI: NO
-- OpenNI PrimeSensor Modules: NO
-- OpenNI2: NO
-- PvAPI: NO
-- GigEVisionSDK: NO
-- UniCap: NO
-- UniCap ucil: NO
-- V4L/V4L2: Using libv4l1 (ver 1.10.0) / libv4l2 (ver 1.10.0)
-- XIMEA: NO
-- Xine: NO
-- gPhoto2: NO
--
-- Parallel framework: OpenMP
--
-- Other third-party libraries:
-- Use IPP: 9.0.1 [9.0.1]
-- at: /home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/3rdparty/ippicv/unpack/ippicv_lnx
-- Use IPP Async: NO
-- Use VA: NO
-- Use Intel VA-API/OpenCL: NO
-- Use Eigen: YES (ver 3.2.7)
-- Use Cuda: NO
-- Use OpenCL: NO
-- Use custom HAL: NO
--
-- Python 2:
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:554:22: error: ‘H5Tclose’ was not declared in this scope
H5Tclose( dstype );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:555:22: error: ‘H5Sclose’ was not declared in this scope
H5Sclose( dspace );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:557:22: error: ‘H5Dclose’ was not declared in this scope
H5Dclose( dsdata );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp: At global scope:
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:466:50: warning: unused parameter ‘dslabel’ [-Wunused-parameter]
void HDF5Impl::dsread( OutputArray Array, String dslabel,
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp: In member function ‘virtual void cv::hdf::HDF5Impl::dswrite(cv::InputArray, cv::String, const int*, const int*) const’:
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:583:5: error: ‘hsize_t’ was not declared in this scope
hsize_t dsdims[n_dims];
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:584:13: error: expected ‘;’ before ‘offset’
hsize_t offset[n_dims];
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:588:7: error: ‘offset’ was not declared in this scope
offset[d] = 0;
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:590:7: error: ‘dsdims’ was not declared in this scope
dsdims[d] = matrix.size[d];
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:601:9: error: ‘dsdims’ was not declared in this scope
dsdims[d] = dims_counts[d];
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:605:5: error: ‘hid_t’ was not declared in this scope
hid_t dsdata = H5Dopen( m_h5_file_id, dslabel.c_str(), H5P_DEFAULT );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:608:11: error: expected ‘;’ before ‘dspace’
hid_t dspace = H5Screate_simple( n_dims, dsdims, NULL );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:614:9: error: ‘offset’ was not declared in this scope
offset[d] = dims_offset[d];
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:618:11: error: expected ‘;’ before ‘fspace’
hid_t fspace = H5Dget_space( dsdata );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:619:26: error: ‘fspace’ was not declared in this scope
H5Sselect_hyperslab( fspace, H5S_SELECT_SET,
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:619:34: error: ‘H5S_SELECT_SET’ was not declared in this scope
H5Sselect_hyperslab( fspace, H5S_SELECT_SET,
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:620:26: error: ‘offset’ was not declared in this scope
offset, NULL, dsdims, NULL );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:620:40: error: ‘dsdims’ was not declared in this scope
offset, NULL, dsdims, NULL );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:620:53: error: ‘H5Sselect_hyperslab’ was not declared in this scope
offset, NULL, dsdims, NULL );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:623:11: error: expected ‘;’ before ‘dstype’
hid_t dstype = GetH5type( matrix.type() );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:628:15: error: expected ‘;’ before ‘adims’
hsize_t adims[1] = { channs };
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:629:7: error: ‘dstype’ was not declared in this scope
dstype = H5Tarray_create( dstype, 1, adims );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:629:44: error: ‘adims’ was not declared in this scope
dstype = H5Tarray_create( dstype, 1, adims );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:629:50: error: ‘H5Tarray_create’ was not declared in this scope
dstype = H5Tarray_create( dstype, 1, adims );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:633:15: error: ‘dsdata’ was not declared in this scope
H5Dwrite( dsdata, dstype, dspace, fspace,
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:633:23: error: ‘dstype’ was not declared in this scope
H5Dwrite( dsdata, dstype, dspace, fspace,
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:633:31: error: ‘dspace’ was not declared in this scope
H5Dwrite( dsdata, dstype, dspace, fspace,
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:634:15: error: ‘H5P_DEFAULT’ was not declared in this scope
H5P_DEFAULT, matrix.data );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:634:40: error: ‘H5Dwrite’ was not declared in this scope
H5P_DEFAULT, matrix.data );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:637:24: error: ‘H5Tclose’ was not declared in this scope
H5Tclose( dstype );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:639:22: error: ‘H5Sclose’ was not declared in this scope
H5Sclose( dspace );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:641:22: error: ‘H5Dclose’ was not declared in this scope
H5Dclose( dsdata );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:580:9: warning: unused variable ‘channs’ [-Wunused-variable]
int channs = matrix.channels();
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp: In member function ‘virtual void cv::hdf::HDF5Impl::dsinsert(cv::InputArray, cv::String, const int*, const int*) const’:
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:670:5: error: ‘hsize_t’ was not declared in this scope
hsize_t dsdims[n_dims];
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:671:13: error: expected ‘;’ before ‘offset’
hsize_t offset[n_dims];
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:675:7: error: ‘offset’ was not declared in this scope
offset[d] = 0;
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:676:7: error: ‘dsdims’ was not declared in this scope
dsdims[d] = matrix.size[d];......
hsize_t foffset[1] = 0 ;
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:1022:11: error: expected ‘;’ before ‘dspace’
hid_t dspace = H5Screate_simple( 1, dsddims, NULL );
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:1025:26: error: ‘dspace’ was not declared in this scope
H5Sselect_hyperslab( dspace, H5S_SELECT_SET,
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:1025:34: error: ‘H5S_SELECT_SET’ was not declared in this scope
H5Sselect_hyperslab( dspace, H5S_SELECT_SET,
^
/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/build/opencv_contrib/modules/hdf/src/hdf5.cpp:1026:26: error: ‘foffset’ was not declared in this scope
foffset, NULL, dsddims, NULL );
[ 57%] Building CXX object modules/ml/CMakeFiles/opencv_ml.dir/src/svm.cpp.o
[ 57%] Building CXX object modules/ml/CMakeFiles/opencv_ml.dir/src/testset.cpp.o
[ 57%] Building CXX object modules/ml/CMakeFiles/opencv_ml.dir/src/tree.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/approx.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/blend.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/canny.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/clahe.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/color.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/colormap.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/connectedcomponents.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/contours.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/convhull.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/corner.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/cornersubpix.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/demosaicing.cpp.o
[ 57%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/deriv.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/distransform.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/drawing.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/emd.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/featureselect.cpp.o
[ 59%] Linking CXX shared library ../../lib/libopencv_ml.so
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/filter.cpp.o
[ 59%] Built target opencv_ml
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/floodfill.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/gabor.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/generalized_hough.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/geometry.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/grabcut.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/hershey_fonts.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/histogram.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/hough.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/imgwarp.cpp.o
[ 59%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/intersection.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/linefit.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/lsd.cpp.o
[ 60%] Linking CXX shared library ../../lib/libopencv_flann.so
[ 60%] Built target opencv_flann
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/matchcontours.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/main.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/min_enclosing_triangle.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/moments.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/morph.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/phasecorr.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/pyramids.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/rotcalipers.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/samplers.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/segmentation.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/shapedescr.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/smooth.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/spatialgradient.cpp.o
[ 60%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/subdivision2d.cpp.o
[ 62%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/sumpixels.cpp.o
[ 62%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/tables.cpp.o
[ 62%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/templmatch.cpp.o
[ 62%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/thresh.cpp.o
[ 62%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/undistort.cpp.o
[ 62%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/src/utils.cpp.o
[ 62%] Building CXX object modules/imgproc/CMakeFiles/opencv_imgproc.dir/opencl_kernels_imgproc.cpp.o
[ 62%] Linking CXX shared library ../../lib/libopencv_imgproc.so
[ 62%] Built target opencv_imgproc
Makefile:160: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
File "/home/cocadas/anaconda2/bin/conda-build", line 6, in
sys.exit(conda_build.cli.main_build.main())
File "/home/cocadas/anaconda2/lib/python2.7/site-packages/conda_build/cli/main_build.py", line 334, in main
execute(sys.argv[1:])
File "/home/cocadas/anaconda2/lib/python2.7/site-packages/conda_build/cli/main_build.py", line 325, in execute
noverify=args.no_verify)
File "/home/cocadas/anaconda2/lib/python2.7/site-packages/conda_build/api.py", line 97, in build
need_source_download=need_source_download, config=config)
File "/home/cocadas/anaconda2/lib/python2.7/site-packages/conda_build/build.py", line 1502, in build_tree
config=config)
File "/home/cocadas/anaconda2/lib/python2.7/site-packages/conda_build/build.py", line 1137, in build
utils.check_call_env(cmd, env=env, cwd=src_dir)
File "/home/cocadas/anaconda2/lib/python2.7/site-packages/conda_build/utils.py", line 616, in check_call_env
return _func_defaulting_env_to_os_environ(subprocess.check_call, *popenargs, **kwargs)
File "/home/cocadas/anaconda2/lib/python2.7/site-packages/conda_build/utils.py", line 612, in _func_defaulting_env_to_os_environ
return func(_args, **kwargs)
File "/home/cocadas/anaconda2/lib/python2.7/subprocess.py", line 186, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/bin/bash', '-x', '-e', '/home/cocadas/anaconda2/conda-bld/opencv3_1490285248642/work/opencv-3.1.0/conda_build.sh']' returned non-zero exit status 2 -
Streaming RTP packets using SDP to ffmpeg
4 April 2017, by Johnathan Kanarek
I have RTP packets in a node.js server and I want to forward them to ffmpeg.
I generate the SDP files on the node.js server side and execute ffmpeg with the SDP as input.
SDP:
v=0
o=mediasoup 21881725401d4e8d56cbd79694c7e2b6e6cacb4a 0 IN IP4 192.168.193.182
s=21881725401d4e8d56cbd79694c7e2b6e6cacb4a
c=IN IP4 192.168.193.182
t=0 0
a=group:LS video audio
m=video 33404 RTP/SAVPF 107
a=rtpmap:107 H264/90000
a=fmtp:107 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f
a=control:track0
a=rtcp-fb:107 ccm fir
a=rtcp-fb:107 nack
a=rtcp-fb:107 nack pli
a=rtcp-fb:107 goog-remb
a=rtcp-fb:107 transport-cc
a=extmap:2 urn:ietf:params:rtp-hdrext:toffset
a=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time
a=extmap:4 urn:3gpp:video-orientation
a=extmap:5 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01
a=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay
a=mid:video
a=sendrecv
m=audio 33402 RTP/SAVPF 111
a=rtpmap:111 opus/48000
a=fmtp:111 minptime=10;useinbandfec=1
a=control:track1
a=rtcp-fb:111 transport-cc
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=mid:audio
a=sendrecv
Command:
ffmpeg -max_delay 5000 -reorder_queue_size 16384 -protocol_whitelist file,crypto,udp,rtp -re -i input.sdp -vcodec copy -acodec aac -y output.mp4
Same with RTMP:
ffmpeg -max_delay 5000 -reorder_queue_size 16384 -protocol_whitelist file,crypto,udp,rtp -re -i input.sdp -vcodec copy -acodec aac -f flv rtmp://127.0.0.1:1935/live/myStream
I get a weird output that plays some video, then gets stuck, then plays some audio, goes back to video, and so on; it never plays video and audio together.
By the way, when I created separate SDP files for the video and the audio and streamed them as two inputs into the same output, I got a valid stream, but the audio is not in sync (about a one-second offset).
ffmpeg -max_delay 5000 -reorder_queue_size 16384 -protocol_whitelist file,crypto,udp,rtp -re -i video.0.sdp -max_delay 5000 -reorder_queue_size 16384 -protocol_whitelist file,crypto,udp,rtp -re -i audio.1.sdp -vcodec copy -acodec aac -shortest -y output.mp4
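The separate SDP files themselves are not included in the post; presumably video.0.sdp is just the combined SDP above cut down to its video section, roughly as sketched in this shell here-doc (hypothetical, not quoted from the post):

# Hypothetical video-only SDP (video.0.sdp): the session-level lines plus the m=video block
# of the combined SDP shown earlier; audio.1.sdp would analogously keep only the m=audio block.
cat > video.0.sdp <<'EOF'
v=0
o=mediasoup 21881725401d4e8d56cbd79694c7e2b6e6cacb4a 0 IN IP4 192.168.193.182
s=21881725401d4e8d56cbd79694c7e2b6e6cacb4a
c=IN IP4 192.168.193.182
t=0 0
m=video 33404 RTP/SAVPF 107
a=rtpmap:107 H264/90000
a=fmtp:107 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f
a=control:track0
EOF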
What is wrong with my SDP?
I tried changing analyzeduration, probesize, rtbufsize, vsync and framerate, and I even tried to remap the streams using -map 0:v -map 0:a (a sketch of that attempt is shown below); nothing helped.
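A sketch of roughly what that remapping attempt looks like, assuming the same combined input.sdp as above (this exact command line is not quoted in the post):

# Explicitly map the video and audio streams of the single SDP input before copying/encoding
ffmpeg -max_delay 5000 -reorder_queue_size 16384 -protocol_whitelist file,crypto,udp,rtp -re \
  -i input.sdp -map 0:v -map 0:a -vcodec copy -acodec aac -y output.mp4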
I also tried to use an RTSP server, see the log:
ffmpeg version 3.2 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-11)
configuration: --prefix=/opt/kaltura/ffmpeg-3.2 --libdir=/opt/kaltura/ffmpeg-3.2/lib --shlibdir=/opt/kaltura/ffmpeg-3.2/lib --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC -I/opt/kaltura/include' --extra-ldflags=-L/opt/kaltura/lib --disable-devices --enable-bzlib --enable-libgsm --enable-libmp3lame --enable-libschroedinger --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libx265 --enable-avisynth --enable-libxvid --enable-filter=movie --enable-avfilter --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libvpx --enable-libspeex --enable-libass --enable-postproc --enable-pthreads --enable-static --enable-shared --enable-gpl --disable-debug --disable-optimizations --enable-gpl --enable-pthreads --enable-swscale --enable-vdpau --enable-bzlib --disable-devices --enable-filter=movie --enable-version3 --enable-indev=lavfi --enable-x11grab
libavutil 55. 34.100 / 55. 34.100
libavcodec 57. 64.100 / 57. 64.100
libavformat 57. 56.100 / 57. 56.100
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
Splitting the commandline.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Reading option '-max_delay' ... matched as AVOption 'max_delay' with argument '500000'.
Reading option '-reorder_queue_size' ... matched as AVOption 'reorder_queue_size' with argument '16384'.
Reading option '-analyzeduration' ... matched as AVOption 'analyzeduration' with argument '2147483647'.
Reading option '-probesize' ... matched as AVOption 'probesize' with argument '2147483647'.
Reading option '-protocol_whitelist' ... matched as AVOption 'protocol_whitelist' with argument 'file,crypto,tcp,udp,rtp'.
Reading option '-re' ... matched as option 're' (read input at native frame rate) with argument '1'.
Reading option '-i' ... matched as input file with argument 'rtsp://192.168.193.182:5000/IcL8tHJdU9oWEK3rAAAA.sdp'.
Reading option '-vcodec' ... matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'h264'.
Reading option '-acodec' ... matched as option 'acodec' (force audio codec ('copy' to copy stream)) with argument 'aac'.
Reading option '-max_interleave_delta' ... matched as AVOption 'max_interleave_delta' with argument '30000000'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option '/opt/mediasoup_sample/recordings/IcL8tHJdU9oWEK3rAAAA.mp4' ... matched as output file.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option loglevel (set logging level) with argument debug.
Applying option y (overwrite output files) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input file rtsp://192.168.193.182:5000/IcL8tHJdU9oWEK3rAAAA.sdp.
Applying option re (read input at native frame rate) with argument 1.
Successfully parsed a group of options.
Opening an input file: rtsp://192.168.193.182:5000/IcL8tHJdU9oWEK3rAAAA.sdp.
[rtsp @ 0x19b4fa0] SDP:
v=0
o=mediasoup IcL8tHJdU9oWEK3rAAAA 0 IN IP4 192.168.193.182
s=IcL8tHJdU9oWEK3rAAAA
c=IN IP4 192.168.193.182
t=0 0
a=group:LS audio video
m=audio 0 RTP/SAVPF 111
a=rtpmap:111 opus/48000
a=fmtp:111 minptime=10;useinbandfec=1
a=control:streamid=0
a=rtcp-fb:111 transport-cc
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=mid:audio
a=sendrecv
a=rtcp-mux
m=video 0 RTP/SAVPF 107
a=rtpmap:107 H264/90000
a=fmtp:107 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f
a=control:streamid=1
a=rtcp-fb:107 ccm fir
a=rtcp-fb:107 nack
a=rtcp-fb:107 nack pli
a=rtcp-fb:107 goog-remb
a=rtcp-fb:107 transport-cc
a=extmap:2 urn:ietf:params:rtp-hdrext:toffset
a=extmap:3 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time
a=extmap:4 urn:3gpp:video-orientation
a=extmap:5 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01
a=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay
a=mid:video
a=sendrecv
a=rtcp-mux
[rtsp @ 0x19b4fa0] audio codec set to: opus
[rtsp @ 0x19b4fa0] audio samplerate set to: 48000
[rtsp @ 0x19b4fa0] audio channels set to: 1
[rtsp @ 0x19b4fa0] video codec set to: h264
[rtsp @ 0x19b4fa0] RTP Packetization Mode: 1
[rtsp @ 0x19b4fa0] RTP Profile IDC: 42 Profile IOP: e0 Level: 1f
[udp @ 0x19b5d60] end receive buffer size reported is 131072
[udp @ 0x19ba020] end receive buffer size reported is 131072
[rtsp @ 0x19b4fa0] setting jitter buffer size to 16384
[udp @ 0x19b7a00] end receive buffer size reported is 131072
[udp @ 0x19daca0] end receive buffer size reported is 131072
[rtsp @ 0x19b4fa0] setting jitter buffer size to 16384
[rtsp @ 0x19b4fa0] hello state=0
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] nal_unit_type: 5, nal_ref_idc: 3
Last message repeated 3 times
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] decode_slice_header error
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] decode_slice_header error
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] decode_slice_header error
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] decode_slice_header error
[h264 @ 0x19b9ac0] no frame!
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x19b9ac0] nal_unit_type: 1, nal_ref_idc: 3
Last message repeated 3 times
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] decode_slice_header error
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] decode_slice_header error
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] decode_slice_header error
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] decode_slice_header error
[h264 @ 0x19b9ac0] no frame!
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x19b9ac0] nal_unit_type: 1, nal_ref_idc: 3
Last message repeated 3 times
... a lot of the same ...
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] decode_slice_header error
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] decode_slice_header error
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] decode_slice_header error
[h264 @ 0x19b9ac0] non-existing PPS 0 referenced
[h264 @ 0x19b9ac0] decode_slice_header error
[h264 @ 0x19b9ac0] no frame!
[h264 @ 0x19b9ac0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x19b9ac0] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x19b9ac0] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x19b9ac0] nal_unit_type: 5, nal_ref_idc: 3
Last message repeated 3 times
[h264 @ 0x19b9ac0] Reinit context to 640x480, pix_fmt: yuv420p
[h264 @ 0x19b9ac0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x19b9ac0] nal_unit_type: 1, nal_ref_idc: 3
Last message repeated 3 times
[h264 @ 0x19b9ac0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x19b9ac0] nal_unit_type: 1, nal_ref_idc: 3
Last message repeated 3 times
[h264 @ 0x19b9ac0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x19b9ac0] nal_unit_type: 1, nal_ref_idc: 3
Last message repeated 3 times
[h264 @ 0x19b9ac0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x19b9ac0] nal_unit_type: 1, nal_ref_idc: 3
Last message repeated 3 times
[h264 @ 0x19b9ac0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x19b9ac0] nal_unit_type: 1, nal_ref_idc: 3
Last message repeated 3 times
[h264 @ 0x19b9ac0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x19b9ac0] nal_unit_type: 1, nal_ref_idc: 3
Last message repeated 3 times
[rtsp @ 0x19b4fa0] All info found
Input #0, rtsp, from 'rtsp://192.168.193.182:5000/IcL8tHJdU9oWEK3rAAAA.sdp':
Metadata:
title : IcL8tHJdU9oWEK3rAAAA
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0, 146, 1/48000: Audio: opus, 48000 Hz, mono, fltp
Stream #0:1, 88, 1/90000: Video: h264 (Constrained Baseline), 1 reference frame, yuv420p(progressive, left), 640x480, 0/1, 30 fps, 30 tbr, 90k tbn, 60 tbc
Successfully opened the file.
Parsing a group of options: output file /opt/mediasoup_sample/recordings/IcL8tHJdU9oWEK3rAAAA.mp4.
Applying option vcodec (force video codec ('copy' to copy stream)) with argument h264.
Applying option acodec (force audio codec ('copy' to copy stream)) with argument aac.
Successfully parsed a group of options.
Opening an output file: /opt/mediasoup_sample/recordings/IcL8tHJdU9oWEK3rAAAA.mp4.
Matched encoder 'libx264' for codec 'h264'.
[file @ 0x1b7bb80] Setting default whitelist 'file,crypto'
Successfully opened the file.
detected 1 logical cores
[graph 0 input from stream 0:1 @ 0x1b788c0] Setting 'video_size' to value '640x480'
[graph 0 input from stream 0:1 @ 0x1b788c0] Setting 'pix_fmt' to value '0'
[graph 0 input from stream 0:1 @ 0x1b788c0] Setting 'time_base' to value '1/90000'
[graph 0 input from stream 0:1 @ 0x1b788c0] Setting 'pixel_aspect' to value '0/1'
[graph 0 input from stream 0:1 @ 0x1b788c0] Setting 'sws_param' to value 'flags=2'
[graph 0 input from stream 0:1 @ 0x1b788c0] Setting 'frame_rate' to value '30/1'
[graph 0 input from stream 0:1 @ 0x1b788c0] w:640 h:480 pixfmt:yuv420p tb:1/90000 fr:30/1 sar:0/1 sws_param:flags=2
[format @ 0x1a78e00] compat: called with args=[yuv420p|yuvj420p|yuv422p|yuvj422p|yuv444p|yuvj444p|nv12|nv16]
[format @ 0x1a78e00] Setting 'pix_fmts' to value 'yuv420p|yuvj420p|yuv422p|yuvj422p|yuv444p|yuvj444p|nv12|nv16'
[AVFilterGraph @ 0x19ba180] query_formats: 4 queried, 3 merged, 0 already done, 0 delayed
[graph 1 input from stream 0:0 @ 0x1b89ae0] Setting 'time_base' to value '1/48000'
[graph 1 input from stream 0:0 @ 0x1b89ae0] Setting 'sample_rate' to value '48000'
[graph 1 input from stream 0:0 @ 0x1b89ae0] Setting 'sample_fmt' to value 'fltp'
[graph 1 input from stream 0:0 @ 0x1b89ae0] Setting 'channel_layout' to value '0x4'
[graph 1 input from stream 0:0 @ 0x1b89ae0] tb:1/48000 samplefmt:fltp samplerate:48000 chlayout:0x4
[audio format for output stream 0:1 @ 0x1a7aa00] Setting 'sample_fmts' to value 'fltp'
[audio format for output stream 0:1 @ 0x1a7aa00] Setting 'sample_rates' to value '96000|88200|64000|48000|44100|32000|24000|22050|16000|12000|11025|8000|7350'
[AVFilterGraph @ 0x1a7a6e0] query_formats: 4 queried, 9 merged, 0 already done, 0 delayed
[h264 @ 0x1b779a0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x1b779a0] nal_unit_type: 7, nal_ref_idc: 3
[h264 @ 0x1b779a0] nal_unit_type: 8, nal_ref_idc: 3
[h264 @ 0x1b779a0] Ignoring NAL type 9 in extradata
[libx264 @ 0x1a6b5e0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0x1a6b5e0] profile High, level 3.0
[libx264 @ 0x1a6b5e0] 264 - core 140 - H.264/MPEG-4 AVC codec - Copyleft 2003-2013 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=1 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to '/opt/mediasoup_sample/recordings/IcL8tHJdU9oWEK3rAAAA.mp4':
Metadata:
title : IcL8tHJdU9oWEK3rAAAA
encoder : Lavf57.56.100
Stream #0:0, 0, 1/15360: Video: h264 (libx264), 1 reference frame ([33][0][0][0] / 0x0021), yuv420p(left), 640x480, 0/1, q=-1--1, 30 fps, 15360 tbn, 30 tbc
Metadata:
encoder : Lavc57.64.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Stream #0:1, 0, 1/48000: Audio: aac (LC) ([64][0][0][0] / 0x0040), 48000 Hz, mono, fltp, delay 1024, 69 kb/s
Metadata:
encoder : Lavc57.64.100 aac
Stream mapping:
Stream #0:1 -> #0:0 (h264 (native) -> h264 (libx264))
Stream #0:0 -> #0:1 (opus (native) -> aac (native))
Press [q] to stop, [?] for help
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
Last message repeated 1 times
[SWR @ 0x1af80a0] Using fltp internally between filters
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
Last message repeated 48 times
[h264 @ 0x1b779a0] nal_unit_type: 5, nal_ref_idc: 3
Last message repeated 3 times
[h264 @ 0x1b779a0] Reinit context to 640x480, pix_fmt: yuv420p
*** 67 dup!
[libx264 @ 0x1a6b5e0] frame= 0 QP=16.76 NAL=3 Slice:I Poc:0 I:1200 P:0 SKIP:0 size=29147 bytes
[libx264 @ 0x1a6b5e0] frame= 1 QP=15.49 NAL=2 Slice:P Poc:8 I:1 P:198 SKIP:1001 size=588 bytes
... a lot of the same ...
[libx264 @ 0x1a6b5e0] frame= 25 QP=16.64 NAL=2 Slice:P Poc:56 I:0 P:15 SKIP:1185 size=72 bytes
[libx264 @ 0x1a6b5e0] frame= 26 QP=27.00 NAL=2 Slice:B Poc:52 I:0 P:18 SKIP:1182 size=44 bytes
frame= 68 fps= 38 q=29.0 size= 32kB time=00:00:00.80 bitrate= 332.6kbits/s dup=67 drop=0 speed=0.453x
[h264 @ 0x1b779a0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x1b779a0] nal_unit_type: 1, nal_ref_idc: 3
Last message repeated 3 times
[h264 @ 0x1b779a0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x1b779a0] nal_unit_type: 1, nal_ref_idc: 3
Last message repeated 3 times
... a lot of the same ...
*** dropping frame 68 from stream 0 at ts 64
[h264 @ 0x1b779a0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x1b779a0] nal_unit_type: 1, nal_ref_idc: 3
Last message repeated 3 times
*** dropping frame 68 from stream 0 at ts 65
[libx264 @ 0x1a6b5e0] frame= 27 QP=29.00 NAL=0 Slice:B Poc:50 I:0 P:1 SKIP:1199 size=19 bytes
[h264 @ 0x1b779a0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x1b779a0] nal_unit_type: 1, nal_ref_idc: 3
Last message repeated 3 times
... a lot of the same ...
[libx264 @ 0x1a6b5e0] frame= 362 QP=24.00 NAL=2 Slice:B Poc:208 I:0 P:6 SKIP:1194 size=30 bytes
[libx264 @ 0x1a6b5e0] frame= 363 QP=26.00 NAL=0 Slice:B Poc:206 I:0 P:0 SKIP:1200 size=19 bytes
[h264 @ 0x1b779a0] nal_unit_type: 1, nal_ref_idc: 3
[h264 @ 0x1b779a0] concealing 880 DC, 880 AC, 880 MV errors in P frame
*** 1 dup!
[libx264 @ 0x1a6b5e0] frame= 364 QP=26.00 NAL=0 Slice:B Poc:210 I:0 P:0 SKIP:1200 size=19 bytes
[libx264 @ 0x1a6b5e0] frame= 365 QP=16.71 NAL=2 Slice:P Poc:220 I:0 P:8 SKIP:1192 size=51 bytes
frame= 407 fps= 16 q=29.0 size= 306kB time=00:00:17.48 bitrate= 143.2kbits/s dup=329 drop=65 speed=0.675x
[rtsp @ 0x19b4fa0] max delay reached. need to consume packet
[rtsp @ 0x19b4fa0] RTP: missed 2 packets
[h264 @ 0x1b779a0] nal_unit_type: 9, nal_ref_idc: 0
[h264 @ 0x1b779a0] nal_unit_type: 1, nal_ref_idc: 3
[h264 @ 0x1b779a0] concealing 920 DC, 920 AC, 920 MV errors in P frame
*** 1 dup!
... a lot of the same ...
[libx264 @ 0x1a6b5e0] frame= 420 QP=25.50 NAL=0 Slice:B Poc:322 I:0 P:280 SKIP:920 size=282 bytes
[libx264 @ 0x1a6b5e0] frame= 421 QP=24.51 NAL=2 Slice:P Poc:326 I:0 P:43 SKIP:1157 size=112 bytes
[aac @ 0x1a79de0] Trying to remove 320 more samples than there are in the queue
frame= 422 fps=8.7 q=29.0 Lsize= 379kB time=00:00:17.54 bitrate= 176.7kbits/s dup=338 drop=65 speed=0.36x
video:240kB audio:123kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 4.257356%
Input file #0 (rtsp://192.168.193.182:5000/IcL8tHJdU9oWEK3rAAAA.sdp):
Input stream #0:0 (audio): 725 packets read (54182 bytes); 725 frames decoded (696000 samples);
Input stream #0:1 (video): 150 packets read (203332 bytes); 150 frames decoded;
Total: 875 packets (257514 bytes) demuxed
Output file #0 (/opt/mediasoup_sample/recordings/IcL8tHJdU9oWEK3rAAAA.mp4):
Output stream #0:0 (video): 422 frames encoded; 422 packets muxed (245681 bytes);
Output stream #0:1 (audio): 680 frames encoded (696000 samples); 681 packets muxed (126146 bytes);
Total: 1103 packets (371827 bytes) muxed
875 frames successfully decoded, 0 decoding errors
[AVIOContext @ 0x1a6c4e0] Statistics: 60 seeks, 1148 writeouts
[libx264 @ 0x1a6b5e0] frame I:3 Avg QP:17.89 size: 17026
[libx264 @ 0x1a6b5e0] frame P:120 Avg QP:18.27 size: 1244
[libx264 @ 0x1a6b5e0] frame B:299 Avg QP:24.50 size: 149
[libx264 @ 0x1a6b5e0] consecutive B-frames: 4.7% 1.9% 1.4% 91.9%
[libx264 @ 0x1a6b5e0] mb I I16..4: 19.9% 48.9% 31.1%
[libx264 @ 0x1a6b5e0] mb P I16..4: 2.1% 5.2% 0.8% P16..4: 10.3% 1.2% 0.6% 0.0% 0.0% skip:79.7%
[libx264 @ 0x1a6b5e0] mb B I16..4: 0.1% 0.1% 0.0% B16..8: 5.4% 0.2% 0.0% direct: 0.8% skip:93.5% L0:56.3% L1:43.1% BI: 0.5%
[libx264 @ 0x1a6b5e0] 8x8 transform intra:60.5% inter:62.3%
[libx264 @ 0x1a6b5e0] coded y,uvDC,uvAC intra: 40.2% 49.9% 19.0% inter: 0.7% 3.2% 0.1%
[libx264 @ 0x1a6b5e0] i16 v,h,dc,p: 26% 30% 9% 36%
[libx264 @ 0x1a6b5e0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 44% 27% 13% 3% 2% 2% 3% 3% 3%
[libx264 @ 0x1a6b5e0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 38% 28% 11% 3% 5% 4% 5% 4% 3%
[libx264 @ 0x1a6b5e0] i8c dc,h,v,p: 38% 28% 23% 12%
[libx264 @ 0x1a6b5e0] Weighted P-Frames: Y:2.5% UV:2.5%
[libx264 @ 0x1a6b5e0] ref P L0: 82.7% 3.3% 10.6% 3.3% 0.0%
[libx264 @ 0x1a6b5e0] ref B L0: 86.6% 12.6% 0.7%
[libx264 @ 0x1a6b5e0] ref B L1: 96.5% 3.5%
[libx264 @ 0x1a6b5e0] kb/s:139.34
[aac @ 0x1a79de0] Qavg: 212.691
Thanks,
Johnathan Kanarek -
My journey to Coviu
27 October 2015, by silvia
My new startup just released our MVP – this is the story of what got me here.
I love creating new applications that let people do their work better or in a manner that wasn’t possible before.
My first such passion was as a student intern when I built a system for a building and loan association’s monthly customer magazine. The group I worked with was managing their advertiser contacts through a set of paper cards and I wrote a dBase based system (yes, that long ago) that would manage their customer relationships. They loved it – until it got replaced by an SAP system that cost 100 times what I cost them, had really poor UX, and only gave them half the functionality. It was a corporate system with ongoing support, which made all the difference to them.
The story repeated itself with a CRM for my Uncle’s construction company, and with a resume and quotation management system for Accenture right after Uni, both of which I left behind when I decided to go into research.
Even as a PhD student, I never lost sight of challenges that people were facing and wanted to develop technology to overcome problems. The aim of my PhD thesis was to prepare for the oncoming onslaught of audio and video on the Internet (yes, this was 1994!) by developing algorithms to automatically extract and locate information in such files, which would enable users to structure, index and search such content.
Many of the use cases that we explored are now part of products or continue to be challenges: finding music that matches your preferences, identifying music or video pieces e.g. to count ads on the radio or to mark copyright infringement, or the automated creation of video summaries such as trailers.
This continued when I joined the CSIRO in Australia – I was working on segmenting speech into words or talk spurts since that would simplify captioning & subtitling, and on MPEG-7 which was a (slightly over-engineered) standard to structure metadata about audio and video.
In 2001 I had the idea of replicating the Web for videos: i.e. creating hyperlinked and searchable video-only experiences. We called it “Annodex” for annotated and indexed video and it needed full-screen hyperlinked video in browsers – man were we ahead of our time! It was my first step into standards, got several IETF RFCs to my name, and started my involvement with open codecs through Xiph.
Around the time that YouTube was founded in 2006, I founded Vquence – originally a video search company for the Web, which later pivoted to a video metadata mining company. Vquence still exists and continues to sell its data to channel partners, but it lacks the user impact that has always driven my work.
As the video element started being developed for HTML5, I had to get involved. I contributed many use cases to the W3C, became a co-editor of the HTML5 spec and focused on video captioning with WebVTT while contracting to Mozilla and later to Google. We made huge progress and today the technology exists to publish video on the Web with captions, making the Web more inclusive for everybody. I contributed code to YouTube and Google Chrome, but was keen to make a bigger impact again.
The opportunity came when a couple of former CSIRO colleagues who now worked for NICTA approached me to get me interested in addressing new use cases for video conferencing in the context of WebRTC. We worked on a kiosk-style solution to service delivery for large service organisations, particularly targeting government. The emerging WebRTC standard posed many technical challenges that we addressed by building rtc.io, by contributing to the standards, and registering bugs on the browsers.
Fast-forward through the development of a few further custom solutions for customers in health and education and we are starting to see patterns of need emerge. The core learning that we’ve come away with is that to get things done, you have to go beyond “talking heads” in a video call. It’s not just about seeing the other person, but much more about having a shared view of the things that need to be worked on and a shared way of interacting with them. Also, we learnt that the things that are being worked on are quite varied and may include multiple input cameras, digital documents, Web pages, applications, device data, controls, forms.
So we set out to build a solution that would enable productive remote collaboration to take place. It would need to provide an excellent user experience, it would need to be simple to work with, provide for the standard use cases out of the box, yet be architected to be extensible for specialised data sharing needs that we knew some of our customers had. It would need to be usable directly on Coviu.com, but also able to integrate with specialised applications that some of our customers were already using, such as the applications that they spend most of their time in (CRMs, practice management systems, learning management systems, team chat systems). It would need to require our customers to sign up, yet their clients to join a call without sign-up.
Collaboration is a big problem. People are continuing to get more comfortable with technology and are less and less inclined to travel distances just to get a service done. In a country as large as Australia, where 12% of the population lives in rural and remote areas, people may not even be able to travel distances, particularly to receive or provide recurring or specialised services, or to achieve work/life balance. To make the world a global village, we need to be able to work together better remotely.
The need for collaboration is being recognised by specialised Web applications already, such as the LiveShare feature of Invision for Designers, Codassium for pair programming, or the recently announced Dropbox Paper. Few go all the way to video – WebRTC is still regarded as a complicated feature to support.
With Coviu, we’d like to offer a collaboration feature to every Web app. We now have a Web app that provides a modern and beautifully designed collaboration interface. To enable other Web apps to integrate it, we are now developing an API. Integration may entail customisation of the data sharing part of Coviu – something Coviu has been designed for. How to replicate the data and keep it consistent when people collaborate remotely – that is where Coviu makes a difference.
We have started our journey and have just launched free signup to the Coviu base product, which allows individuals to own their own “room” (i.e. a fixed URL) in which to collaborate with others. A huge shout out goes to everyone in the Coviu team – a pretty amazing group of people – who have turned the app from an idea to reality. You are all awesome!
With Coviu you can share and annotate:
- images (show your mum photos of your last holidays, or get feedback on an architecture diagram from a customer),
- pdf files (give a presentation remotely, or walk a customer through a contract),
- whiteboards (brainstorm with a colleague), and
- share an application window (watch a YouTube video together, or work through your task list with your colleagues).
All of these are regarded as “shared documents” in Coviu and thus have zooming and annotation features, and are listed in a document tray for ease of navigation.
This is just the beginning of how we want to make working together online more productive. Give it a go and let us know what you think.
The post My journey to Coviu first appeared on ginger’s thoughts.