
Other articles (41)
-
Websites made with MediaSPIP
2 May 2011 — This page lists some websites based on MediaSPIP.
-
Possibility of deployment as a farm
12 April 2011 — MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This allows, for example: sharing the setup costs between several projects or individuals; quickly deploying a multitude of unique sites; and avoiding having to put all the creations into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)
Adding user-specific information and other author-related behaviour changes
12 April 2011 — The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".
On other sites (6761)
-
Mirror not found when trying to install FFmpeg on CentOS 7
31 October 2016, by Peter
I'm on a dedicated server with root access, and I'm not familiar with servers. I'm trying to install FFmpeg on my server, but following online instructions I'm getting errors and can't figure out how to solve them. Any light on this will be very appreciated.
[root@ns335004 ~]# yum update
base | 3.6 kB 00:00:00
http://apt.sw.be/redhat/el7/en/x86_64/dag/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below knowledge base article
https://access.redhat.com/articles/1320623
If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/
One of the configured repositories failed (DAG RPM Repository),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Disable the repository, so yum won't use it by default. Yum will then
just ignore the repository until you permanently enable it again or use
--enablerepo for temporary usage:
yum-config-manager --disable dag
4. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=dag.skip_if_unavailable=true
failure: repodata/repomd.xml from dag: [Errno 256] No more mirrors to try.
http://apt.sw.be/redhat/el7/en/x86_64/dag/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Then I ran repolist:
[root@ns335004 ~]# yum repolist all
http://apt.sw.be/redhat/el7/en/x86_64/dag/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below knowledge base article
https://access.redhat.com/articles/1320623
If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/
http://apt.sw.be/redhat/el7/en/x86_64/dag/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
repo id repo name status
C7.0.1406-base/x86_64 CentOS-7.0.1406 - Base disabled
C7.0.1406-centosplus/x86_64 CentOS-7.0.1406 - CentOSPlus disabled
C7.0.1406-extras/x86_64 CentOS-7.0.1406 - Extras disabled
C7.0.1406-fasttrack/x86_64 CentOS-7.0.1406 - CentOSPlus disabled
C7.0.1406-updates/x86_64 CentOS-7.0.1406 - Updates disabled
C7.1.1503-base/x86_64 CentOS-7.1.1503 - Base disabled
C7.1.1503-centosplus/x86_64 CentOS-7.1.1503 - CentOSPlus disabled
C7.1.1503-extras/x86_64 CentOS-7.1.1503 - Extras disabled
C7.1.1503-fasttrack/x86_64 CentOS-7.1.1503 - CentOSPlus disabled
C7.1.1503-updates/x86_64 CentOS-7.1.1503 - Updates disabled
base/7/x86_64 CentOS-7 - Base enabled: 9,007
base-debuginfo/x86_64 CentOS-7 - Debuginfo disabled
base-source/7 CentOS-7 - Base Sources disabled
c7-media CentOS-7 - Media disabled
centosplus/7/x86_64 CentOS-7 - Plus disabled
centosplus-source/7 CentOS-7 - Plus Sources disabled
cr/7/x86_64 CentOS-7 - cr disabled
dag/7/x86_64 DAG RPM Repository enabled: 0
epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 enabled: 10,764
epel-debuginfo/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 - Debug disabled
epel-source/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 - Source disabled
epel-testing/x86_64 Extra Packages for Enterprise Linux 7 - Testing - x86_64 disabled
epel-testing-debuginfo/x86_64 Extra Packages for Enterprise Linux 7 - Testing - x86_64 - Debug disabled
epel-testing-source/x86_64 Extra Packages for Enterprise Linux 7 - Testing - x86_64 - Source disabled
extras/7/x86_64 CentOS-7 - Extras enabled: 393
extras-source/7 CentOS-7 - Extras Sources disabled
fasttrack/7/x86_64 CentOS-7 - fasttrack disabled
nux-dextop/x86_64 Nux.Ro RPMs for general desktop use disabled
nux-dextop-testing/x86_64 Nux.Ro RPMs for general desktop use - testing disabled
plesk-php-5.6 PHP v 5.6 for Plesk - x86_64 enabled: 31
plesk-php-7.0 PHP v 7.0 for Plesk - x86_64 enabled: 28
updates/7/x86_64 CentOS-7 - Updates enabled: 2,560
updates-source/7 CentOS-7 - Updates Sources disabled
repolist: 22,783
I also tried:
sudo yum clean metadata
sudo yum clean all
But I'm still getting the same 404 error.
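One thing I suspect after more digging: the DAG/RPMForge repository appears to be abandoned, so its mirrors simply return 404 now. If that's right, a minimal workaround sketch would be to disable it and pull FFmpeg from the Nux Dextop repo already configured above (whether nux-dextop actually carries the ffmpeg package for this machine is an assumption on my part):
# Disable the dead DAG repo so yum stops failing on it
sudo yum-config-manager --disable dag
sudo yum clean all
# Nux Dextop is already configured (see repolist above); try installing ffmpeg from it
sudo yum --enablerepo=nux-dextop install ffmpeg ffmpeg-devel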
Thanks.
-
What is the correct way to write frames in FFmpeg?
6 February 2020, by hagor
I am working on an FFmpeg writer implementation and I cannot understand what I am doing wrong.
I have an MDF (media digital file) file which I need to convert to AVI, and I have reference software that already does this. The test case: the AVI file produced by my implementation and the AVI file produced by the reference software should be identical.
I can get frames from the input MDF file and convert them to BMPs correctly, so I suppose I am doing something wrong with FFmpeg.
I also need to use the raw RGB codec in FFmpeg. Here is the code I wrote to fill AVI files with frames:
if (hOffLoaderDVR && m_hDeviceCollection && device && hDriveSetDVR && hFile)
{
std::string camSuffix = "_cam_";
std::string cameraName = hFile->streamByIndex(streamC)->cameraPortName().c_str();
std::string fileName = pathToAviDir + hFile->parameters()->name.c_str() + camSuffix + cameraName + std::to_string(streamC).c_str() + ".avi";
Offload::Request request;
Common::DataTypeHandle cameraParams = hFile->streamByIndex(streamC)->streamView()->dataType();
AVFrame* frame = m_ffwriter.alloc_picture(AV_PIX_FMT_BGR24, cameraParams->width(), cameraParams->height());
size_t datasize = hFile->streamByIndex(streamC)->streamView()->frameAtIndex(0)->buffer()->size(); // size in bytes
RecordingParams params(fileName, cameraParams->width(), cameraParams->height(), 50,
AV_PIX_FMT_BGR24, datasize);
frame->pkt_size = datasize;
m_ffwriter.delayedOpen(params);
for (unsigned int frameC = 0; frameC < hFile->streamByIndex(streamC)->streamView()->frameCount(); frameC++)
{
m_ffwriter.fill_rgb_image(frame, hFile->streamByIndex(streamC)->streamView()->frameAtIndex(frameC)->buffer()->data());
m_ffwriter.putImage(frame);
}
m_ffwriter.close();
av_frame_free(&frame);
}
To open the AVI file I use the function FfmpegWriter::delayedOpen:
bool FfmpegWriter::delayedOpen(const RecordingParams & params) {
unsigned int w = params.getWidth();
unsigned int h = params.getHeight();
unsigned int framerate = params.getFramerate();
unsigned int datasize = params.getDataSize();
m_filename = params.getPath();
unsigned int sample_rate = 0; //default
unsigned int channels = 0; //default
m_delayed = false;
if (w <= 0 || h <= 0) {
m_delayed = true;
return true;
}
m_ready = true;
// auto detect the output format from the name. default is mpeg.
m_fmt = av_guess_format(nullptr, m_filename.c_str(), nullptr);
if (!m_fmt) {
printf("Could not deduce output format from file extension: using MPEG.\n");
m_fmt = av_guess_format("mpeg", nullptr, nullptr);
}
if (!m_fmt) {
fprintf(stderr, "Could not find suitable output format\n");
::exit(1);
}
// note: set the codec only after the null checks; the original assigned
// m_fmt->video_codec before checking m_fmt, which could dereference null
m_fmt->video_codec = AV_CODEC_ID_RAWVIDEO; //can be moved to a parameter if required
// allocate the output media context
m_oc = avformat_alloc_context();
if (!m_oc) {
fprintf(stderr, "Memory error\n");
::exit(1);
}
m_oc->oformat = m_fmt;
m_fmt->flags = AVFMT_NOTIMESTAMPS;
snprintf(m_oc->filename, sizeof(m_oc->filename), "%s", m_filename.c_str());
// add the audio and video streams using the default format codecs
// and initialize the codecs
m_video_st = nullptr;
m_audio_st = nullptr;
if (m_fmt->video_codec != AV_CODEC_ID_NONE) {
m_video_st = add_video_stream(m_oc, m_fmt->video_codec, w, h, framerate);
}
av_dump_format(m_oc, 0, m_filename.c_str(), 1);
// now that all the parameters are set, we can open
// video codecs and allocate the necessary encode buffers
if (m_video_st) {
open_video(m_oc, m_video_st, datasize);
}
// open the output file, if needed
if (!(m_fmt->flags & AVFMT_NOFILE)) {
if (avio_open(&m_oc->pb, m_filename.c_str(), AVIO_FLAG_WRITE) < 0) {
fprintf(stderr, "Could not open '%s'\n", m_filename.c_str());
::exit(1);
}
}
// write the stream header, if any
avformat_write_header(m_oc, NULL);
return true;
}
And to fill images and put them into the AVI, I use these functions:
void FfmpegWriter::fill_rgb_image(AVFrame *pict, void *p)
{
memcpy(pict->data[0], p, pict->pkt_size);
}
bool FfmpegWriter::putImage(AVFrame * newFrame) {
if (m_delayed) {
// savedConfig.put("width",Value((int)image.width()));
// savedConfig.put("height",Value((int)image.height()));
}
if (!isOk()) {
return false;
}
if (m_video_st) {
m_video_pts = (double)av_stream_get_end_pts(m_video_st) *m_video_st->time_base.num / m_video_st->time_base.den;
}
else {
m_video_pts = 0.0;
}
if (!(m_video_st)) {
return false;
}
// write interleaved video frame
write_video_frame(m_oc, m_video_st, newFrame);
return true;
}
Do I not open the context correctly? Or where might the problem be? The problems I can see are that the output AVI has around a minute of delay at the beginning with no frames changing, and the colour channels behave differently (it seems that red and blue have disappeared). Does it make any difference to use another format? I currently use AV_PIX_FMT_BGR24, which seems correct (I can visualize frames from the same pointer correctly).
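One thing I notice while re-reading the code: I never assign frame->pts before writing. Below is a minimal sketch of the write loop with an explicit pts (assuming the stream time base was set to 1/framerate in add_video_stream, and frameBuffer(frameC) is a placeholder for the frameAtIndex(frameC)->buffer()->data() lookup above):
// Hypothetical fix sketch: stamp every frame with a monotonically increasing pts.
// With a 1/framerate time base, the running frame index is a valid timestamp.
for (unsigned int frameC = 0; frameC < frameCount; frameC++)
{
    m_ffwriter.fill_rgb_image(frame, frameBuffer(frameC));
    frame->pts = frameC; // pts in stream time-base units
    m_ffwriter.putImage(frame);
}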
Thank you for your help !
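Edit: another suspicion about the colour problem. fill_rgb_image copies pkt_size bytes into data[0] in one go, but FFmpeg may allocate frames with padded rows, so linesize[0] can be larger than width * 3 and the copy would drift. A row-by-row copy that honours the stride, as a sketch (assuming the source buffer holds tightly packed BGR24 rows):
void FfmpegWriter::fill_rgb_image(AVFrame *pict, void *p)
{
    const uint8_t *src = static_cast<const uint8_t *>(p);
    const int rowBytes = pict->width * 3; // packed BGR24 source row
    for (int y = 0; y < pict->height; ++y) {
        // copy one row at a time into the (possibly padded) destination
        memcpy(pict->data[0] + y * pict->linesize[0], src + y * rowBytes, rowBytes);
    }
}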
-
How to Read DJI H264 FPV Feed as an OpenCV Mat Object?
29 May 2019, by Walter Morawa
TL;DR: All DJI developers would benefit from decoding raw H264 video stream byte arrays to a format compatible with OpenCV.
I’ve spent a lot of time looking for a solution to reading DJI’s FPV feed as an OpenCV Mat object. I am probably overlooking something fundamental, since I am not too familiar with Image Encoding/Decoding.
Future developers who come across this will likely run into a bunch of the same issues I had. It would be great if DJI developers could use OpenCV directly without needing a third-party library.
I'm willing to use FFmpeg or JavaCV if necessary, but that's quite a hurdle for most Android developers, as we'd have to use C++, the NDK, a terminal for testing, etc. That seems like overkill, and both options seem quite time-consuming. This JavaCV H264 conversion seems unnecessarily complex. I found it via this relevant question.
I believe the issue lies in the fact that we need to decode both the byte array of length 6 (info array) and the byte array with current frame info simultaneously.
Basically, DJI’s FPV feed comes in a number of formats.
- Raw H264 (MPEG4) in VideoFeeder.VideoDataListener
// The callback for receiving the raw H264 video data for camera live view
mReceivedVideoDataListener = new VideoFeeder.VideoDataListener() {
@Override
public void onReceive(byte[] videoBuffer, int size) {
//Log.d("BytesReceived", Integer.toString(videoStreamFrameNumber));
if (videoStreamFrameNumber++%30 == 0){
//convert video buffer to opencv array
OpenCvAndModelAsync openCvAndModelAsync = new OpenCvAndModelAsync();
openCvAndModelAsync.execute(videoBuffer);
}
if (mCodecManager != null) {
mCodecManager.sendDataToDecoder(videoBuffer, size);
}
}
};
- DJI also has its own Android decoder sample that uses FFmpeg to convert to YUV format.
@Override
public void onYuvDataReceived(final ByteBuffer yuvFrame, int dataSize, final int width, final int height) {
//In this demo, we test the YUV data by saving it into JPG files.
//DJILog.d(TAG, "onYuvDataReceived " + dataSize);
if (count++ % 30 == 0 && yuvFrame != null) {
final byte[] bytes = new byte[dataSize];
yuvFrame.get(bytes);
AsyncTask.execute(new Runnable() {
@Override
public void run() {
if (bytes.length >= width * height) {
Log.d("MatWidth", "Made it");
YuvImage yuvImage = saveYuvDataToJPEG(bytes, width, height);
Bitmap rgbYuvConvert = convertYuvImageToRgb(yuvImage, width, height);
Mat yuvMat = new Mat(height, width, CvType.CV_8UC1);
yuvMat.put(0, 0, bytes);
//OpenCv Stuff
}
}
});
}
}
Edit: For those who want to see DJI's YUV-to-JPEG function, here it is from the sample application:
private YuvImage saveYuvDataToJPEG(byte[] yuvFrame, int width, int height){
byte[] y = new byte[width * height];
byte[] u = new byte[width * height / 4];
byte[] v = new byte[width * height / 4];
byte[] nu = new byte[width * height / 4]; //
byte[] nv = new byte[width * height / 4];
System.arraycopy(yuvFrame, 0, y, 0, y.length);
Log.d("MatY", y.toString());
for (int i = 0; i < u.length; i++) {
v[i] = yuvFrame[y.length + 2 * i];
u[i] = yuvFrame[y.length + 2 * i + 1];
}
int uvWidth = width / 2;
int uvHeight = height / 2;
for (int j = 0; j < uvWidth / 2; j++) {
for (int i = 0; i < uvHeight / 2; i++) {
byte uSample1 = u[i * uvWidth + j];
byte uSample2 = u[i * uvWidth + j + uvWidth / 2];
byte vSample1 = v[(i + uvHeight / 2) * uvWidth + j];
byte vSample2 = v[(i + uvHeight / 2) * uvWidth + j + uvWidth / 2];
nu[2 * (i * uvWidth + j)] = uSample1;
nu[2 * (i * uvWidth + j) + 1] = uSample1;
nu[2 * (i * uvWidth + j) + uvWidth] = uSample2;
nu[2 * (i * uvWidth + j) + 1 + uvWidth] = uSample2;
nv[2 * (i * uvWidth + j)] = vSample1;
nv[2 * (i * uvWidth + j) + 1] = vSample1;
nv[2 * (i * uvWidth + j) + uvWidth] = vSample2;
nv[2 * (i * uvWidth + j) + 1 + uvWidth] = vSample2;
}
}
//nv21test
byte[] bytes = new byte[yuvFrame.length];
System.arraycopy(y, 0, bytes, 0, y.length);
for (int i = 0; i < u.length; i++) {
bytes[y.length + (i * 2)] = nv[i];
bytes[y.length + (i * 2) + 1] = nu[i];
}
Log.d(TAG,
"onYuvDataReceived: frame index: "
+ DJIVideoStreamDecoder.getInstance().frameIndex
+ ",array length: "
+ bytes.length);
YuvImage yuver = screenShot(bytes,Environment.getExternalStorageDirectory() + "/DJI_ScreenShot", width, height);
return yuver;
}
/**
* Save the buffered data into a JPG image file
*/
private YuvImage screenShot(byte[] buf, String shotDir, int width, int height) {
File dir = new File(shotDir);
if (!dir.exists() || !dir.isDirectory()) {
dir.mkdirs();
}
YuvImage yuvImage = new YuvImage(buf,
ImageFormat.NV21,
width,
height,
null);
OutputStream outputFile = null;
final String path = dir + "/ScreenShot_" + System.currentTimeMillis() + ".jpg";
try {
outputFile = new FileOutputStream(new File(path));
} catch (FileNotFoundException e) {
Log.e(TAG, "test screenShot: new bitmap output file error: " + e);
//return;
}
if (outputFile != null) {
yuvImage.compressToJpeg(new Rect(0,
0,
width,
height), 100, outputFile);
}
try {
outputFile.close();
} catch (IOException e) {
Log.e(TAG, "test screenShot: compress yuv image error: " + e);
e.printStackTrace();
}
runOnUiThread(new Runnable() {
@Override
public void run() {
displayPath(path);
}
});
return yuvImage;
}
- DJI also appears to have a "getRgbaData" function, but there is literally not a single example online or from DJI. Go ahead and Google "DJI getRgbaData"... There's only the API documentation, which explains the self-explanatory parameters and return values but nothing else. I couldn't figure out where to call it, and there doesn't appear to be a callback function as there is with YUV. You can't call it from the H264 byte array directly, but perhaps you can get it from the YUV data.
Option 1 is much more preferable to option 2, since YUV format has quality issues. Option 3 would also likely involve a decoder.
Here’s a screenshot that DJI’s own YUV conversion produces.
I’ve looked at a bunch of things about how to improve the YUV, remove green and yellow colors and whatnot, but at this point if DJI can’t do it right, I don’t want to invest resources there.
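That said, for anyone who can live with the YUV route: OpenCV itself can convert NV21 to RGB, which would avoid DJI's manual JPEG round trip entirely. A minimal sketch, assuming onYuvDataReceived really delivers NV21-ordered bytes of length width * height * 3 / 2 (imports org.opencv.core.* and org.opencv.imgproc.Imgproc assumed):
// Sketch: wrap the NV21 bytes in a single-channel Mat and let OpenCV convert.
byte[] bytes = new byte[dataSize];
yuvFrame.get(bytes);
Mat yuvMat = new Mat(height + height / 2, width, CvType.CV_8UC1);
yuvMat.put(0, 0, bytes);
Mat bgrMat = new Mat();
Imgproc.cvtColor(yuvMat, bgrMat, Imgproc.COLOR_YUV2BGR_NV21);
// bgrMat is now an ordinary 8UC3 image, ready for contours, moments, etc.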
Regarding Option 1, I know there are FFmpeg and JavaCV, which seem like good options if I have to go the video-decoding route.
Moreover, from what I understand, OpenCV can't read or write video files without FFmpeg, but I'm not trying to read a video file; I am trying to read an H264/MPEG4 byte[] array. The following code seems to get positive results.
/* Async OpenCV Code */
private class OpenCvAndModelAsync extends AsyncTask<byte[], Void, double[]> {
@Override
protected double[] doInBackground(byte[]... params) {//Background Code Executing. Don't touch any UI components
//get fpv feed and convert bytes to mat array
Mat videoBufMat = new Mat(4, params[0].length, CvType.CV_8UC4);
videoBufMat.put(0,0, params[0]);
//if I add this in it says the bytes are empty.
//Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_ANYCOLOR);
//encodeVideoBuf.release();
Log.d("MatRgba", videoBufMat.toString());
for (int i = 0; i< videoBufMat.rows(); i++){
for (int j=0; j< videoBufMat.cols(); j++){
double[] rgb = videoBufMat.get(i, j);
Log.i("Matrix", "red: "+rgb[0]+" green: "+rgb[1]+" blue: "+rgb[2]+" alpha: "
+ rgb[3] + " Length: " + rgb.length + " Rows: "
+ videoBufMat.rows() + " Columns: " + videoBufMat.cols());
}
}
double[] center = openCVThingy(videoBufMat);
return center;
}
protected void onPostExecute(double[] center) {
//handle ui or another async task if necessary
}
}
Rows = 4, Columns > 30k. I get lots of RGB values that seem valid, such as red = 113, green = 75, blue = 90, alpha = 220 (as a made-up example); however, I get a ton of 0,0,0,0 values. That should be somewhat okay, since black is 0,0,0 (although I would have thought the alpha would be higher) and I have a black object in my image. I also don't seem to get any white values (255, 255, 255), even though there is plenty of white area. I'm not logging the entire byte array, so they could be there, but I have yet to see them.
However, when I try to compute the contours from this image, I almost always get that the moments (center x, y) are exactly in the center of the image. This error has nothing to do with my color filter or contours algorithm, as I wrote a script in python and tested that I implemented it correctly in Android by reading a still image and getting the exact same number of contours, position, etc in both Python and Android.
I noticed it has something to do with the videoBuffer byte size (bonus points if you can explain why every other length is 6):
2019-05-23 21:14:29.601 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2425
2019-05-23 21:14:29.802 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2659
2019-05-23 21:14:30.004 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:30.263 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6015
2019-05-23 21:14:30.507 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:30.766 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4682
2019-05-23 21:14:31.005 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:31.234 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2840
2019-05-23 21:14:31.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4482
2019-05-23 21:14:31.664 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:31.927 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4768
2019-05-23 21:14:32.174 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:32.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4700
2019-05-23 21:14:32.668 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:32.864 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4740
2019-05-23 21:14:33.102 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
2019-05-23 21:14:33.365 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4640
My questions:
I. Is this the correct format to read an H264 byte array as a Mat?
Assuming the format is RGBA, that means rows = 4 and columns = byte[].length, with CvType.CV_8UC4. Do I have height and width correct? Something tells me the YUV height and width are off. I was getting some meaningful results, but the contours were exactly in the center, just like with the H264.
II. Does OpenCV handle MP4 in Android like this? If not, do we need to use FFmpeg or JavaCV?
III. Does the int size have something to do with it? Why is the int size occasionally 6, and other times 2400 to 6000? I've heard about the difference between this frame's information and information about the next frame, but I'm simply not knowledgeable enough to know how to apply that here.
I'm starting to think this is where the issue lies. Since I need to get the 6-byte array of info about the next frame, perhaps my modulo 30 is incorrect. So should I pass the 29th or 31st frame as a format byte for each frame? How is that done in OpenCV, or are we doomed to use the complicated FFmpeg? How would I go about joining the neighboring frames/byte arrays?
IV. Can I fix this using Imgcodecs? I was hoping OpenCV would natively handle whether a frame was color data from this frame or info about the next frame. I added the code below, but I am getting an empty array:
Mat videoBufMat = Imgcodecs.imdecode(new MatOfByte(params[0]), Imgcodecs.IMREAD_UNCHANGED);
This is also empty:
Mat encodeVideoBuf = new Mat(4, params[0].length, CvType.CV_8UC4);
encodeVideoBuf.put(0,0, params[0]);
Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_UNCHANGED);
V. Should I try converting the bytes into an Android JPEG and then importing it? Why does DJI's YUV decoder look so complicated? It makes me wary of trying FFmpeg or JavaCV, and tempted to just stick to the Android decoder or the OpenCV decoder.
VI. At what stage should I resize the frames to speed up calculations?
Edit: DJI support got back to me and confirmed they don't have any samples for doing what I've described. This is a time for us, the community, to make this available to everyone!
Upon further research, I don't think OpenCV will be able to handle this, as OpenCV's Android SDK has no functionality for video files/URLs (apart from a homegrown MJPEG codec).
So is there a way in Android to convert to MJPEG or similar in order to read it? In my application, I only need 1 or 2 frames per second, so perhaps I can save the image as a JPEG.
But for real-time applications we will likely need to write our own decoder. Please help so that we can make this available to everyone! This question seems promising :
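In the meantime, here is the direction I'm considering for the decoder: an untested sketch that feeds DJI's raw H264 buffers into Android's MediaCodec. The stream dimensions and the assumption that the raw buffers are complete H264 access units are mine, not DJI-confirmed (imports android.media.MediaCodec, android.media.MediaFormat, java.nio.ByteBuffer, java.io.IOException assumed):
// Sketch: hardware-decode DJI's raw H264 buffers without FFmpeg/JavaCV.
private MediaCodec decoder;

void startDecoder(int width, int height) throws IOException {
    decoder = MediaCodec.createDecoderByType("video/avc");
    MediaFormat fmt = MediaFormat.createVideoFormat("video/avc", width, height);
    decoder.configure(fmt, null /* ByteBuffer output, no Surface */, null, 0);
    decoder.start();
}

// Call from VideoFeeder.VideoDataListener.onReceive(videoBuffer, size).
void feed(byte[] videoBuffer, int size) {
    int inIndex = decoder.dequeueInputBuffer(10000);
    if (inIndex >= 0) {
        ByteBuffer in = decoder.getInputBuffer(inIndex);
        in.clear();
        in.put(videoBuffer, 0, size);
        decoder.queueInputBuffer(inIndex, 0, size, System.nanoTime() / 1000, 0);
    }
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    int outIndex = decoder.dequeueOutputBuffer(info, 10000);
    if (outIndex >= 0) {
        byte[] yuv = new byte[info.size];
        decoder.getOutputBuffer(outIndex).get(yuv);
        decoder.releaseOutputBuffer(outIndex, false);
        // yuv can now go through the NV21-to-Mat conversion sketched earlier
        // (the exact chroma layout depends on the device's decoder).
    }
}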