
Advanced search
Media (91)
-
Richard Stallman et le logiciel libre
19 October 2011
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (20)
-
The MediaSPIP configuration area
29 November 2010
The MediaSPIP configuration area is reserved for administrators; an "administer" menu link is usually displayed at the top of the page [1].
It lets you fine-tune your site's configuration.
Navigation within this configuration area is divided into three parts: the general site configuration, which notably lets you modify the main information about the site (...)
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
-
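The conversion rules described above can be sketched as ffmpeg command lines. This is a hedged sketch only: the exact flags MediaSPIP passes to ffmpeg are not shown in this listing, and the input/output names (source.avi, out.*) are placeholders. The commands are only assembled into strings here, so the sketch runs without ffmpeg installed; drop the echo and run each string to actually transcode.

```shell
#!/bin/sh
# Hypothetical file names; the real commands MediaSPIP runs are not shown
# in the listing above. We only assemble the command strings here.
SRC="source.avi"

# HTML5 video: WebM (libvpx) and Ogv (libtheora); MP4 (libx264) for Flash.
WEBM_CMD="ffmpeg -i $SRC -c:v libvpx -c:a libvorbis out.webm"
OGV_CMD="ffmpeg -i $SRC -c:v libtheora -c:a libvorbis out.ogv"
MP4_CMD="ffmpeg -i $SRC -c:v libx264 -c:a aac out.mp4"

# HTML5 audio: Ogg (libvorbis); MP3 (libmp3lame) for Flash.
OGG_CMD="ffmpeg -i $SRC -c:a libvorbis out.ogg"
MP3_CMD="ffmpeg -i $SRC -c:a libmp3lame out.mp3"

echo "$WEBM_CMD"
echo "$OGV_CMD"
echo "$MP4_CMD"
echo "$OGG_CMD"
echo "$MP3_CMD"
```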
Media-specific libraries and software
10 December 2010
For correct and optimal operation, several things need to be taken into consideration.
After installing apache2, mysql and php5, it is important to install other required software, whose installation is described in the related links: a set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio, in order to support as many file formats as possible (see this tutorial); FFMpeg with the maximum number of decoders and (...)
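The libraries listed above typically end up as switches to ffmpeg's configure script. A hedged sketch, assuming the option names used by ffmpeg's build system (--enable-libx264 and friends); the final ./configure invocation is only echoed, since the real installation steps live in the linked tutorial.

```shell
#!/bin/sh
# Sketch only: assemble the configure switches that wire in the libraries
# mentioned above (x264, libtheora, libvpx, plus the usual audio codecs).
FLAGS="--enable-gpl"               # required for libx264
FLAGS="$FLAGS --enable-libx264"    # H.264 encoding for MP4
FLAGS="$FLAGS --enable-libtheora"  # Theora encoding for Ogv
FLAGS="$FLAGS --enable-libvpx"     # VP8 encoding for WebM
FLAGS="$FLAGS --enable-libmp3lame --enable-libvorbis"

# The actual call would be: ./configure $FLAGS
echo "./configure $FLAGS"
```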
On other sites (6011)
-
Error while compiling ffmpeg on Cygwin for Android [closed]
7 November 2011, by Satheesh
Possible Duplicate:
Error when run the command ./build.sh in FFmpeg decoder in Android
I am trying to use the ffmpeg decoding library in my Android project. I downloaded FFmpeg (the bambuser client version) and unpacked it into the project's jni folder. I then ran the extract command (./extract.sh) under Cygwin (because I am using Windows OS), and after that I tried to build the ffmpeg library with ./build.sh.
I have googled a lot but I can't make it work. Please help me out.
NDK version: NDKr5b. Cygwin tool version: 1.7
My build.sh file is:
/************Command starts here******************/
#!/bin/bash
if [ "$NDK" = "" ]; then
echo NDK variable not set, assuming ${HOME}/android-ndk
export NDK=${HOME}/android-ndk
fi
SYSROOT=$NDK/platforms/android-8/arch-arm
# Expand the prebuilt/* path into the correct one
TOOLCHAIN=`echo $NDK/toolchains/arm-linux-androideabi-4.4.3/prebuilt/windows`
export PATH=$TOOLCHAIN/bin:$PATH
rm -rf build/ffmpeg
mkdir -p build/ffmpeg
cd ffmpeg
# Don't build any neon version for now
for version in armv5te armv7a
do
    DEST=../build/ffmpeg
    FLAGS="--target-os=linux --arch=arm"
    FLAGS="$FLAGS --cross-prefix=arm-linux-androideabi- --arch=arm"
    FLAGS="$FLAGS --sysroot=$SYSROOT"
    FLAGS="$FLAGS --soname-prefix=/data/data/org.rifluxyss.androiddev.livewallpapaerffmpeg/lib/"
    FLAGS="$FLAGS --enable-cross-compile --cc=$TOOLCHAIN/bin/arm-linux-androideabi-gcc"
    FLAGS="$FLAGS --enable-shared --disable-symver"
    FLAGS="$FLAGS --nm=$TOOLCHAIN/bin/arm-linux-androideabi-nm"
    FLAGS="$FLAGS --ar=$TOOLCHAIN/bin/arm-linux-androideabi-ar"
    FLAGS="$FLAGS --ranlib=$TOOLCHAIN/bin/arm-linux-androideabi-ranlib"
    FLAGS="$FLAGS --enable-small --optimization-flags=-O2"
    FLAGS="$FLAGS --enable-encoder=mpeg2video --enable-encoder=nellymoser --enable-memalign-hack"
    FLAGS="$FLAGS --disable-ffmpeg --disable-ffplay"
    FLAGS="$FLAGS --disable-ffserver --disable-ffprobe --disable-encoders"
    FLAGS="$FLAGS --disable-muxers --disable-devices --disable-protocols"
    FLAGS="$FLAGS --enable-protocol=file --enable-avfilter"
    FLAGS="$FLAGS --disable-network --enable-pthreads --enable-avutil"
    FLAGS="$FLAGS --disable-avdevice --disable-asm --extra-libs=-lgcc"
    case "$version" in
        neon)
            EXTRA_CFLAGS="-march=armv7-a -mfloat-abi=softfp -mfpu=neon"
            EXTRA_LDFLAGS="-Wl,--fix-cortex-a8"
            # Runtime choosing neon vs non-neon requires
            # renamed files
            ABI="armeabi-v7a"
            ;;
        armv7a)
            EXTRA_CFLAGS="-march=armv7-a -mfloat-abi=softfp"
            EXTRA_LDFLAGS=""
            ABI="armeabi-v7a"
            ;;
        *)
            EXTRA_CFLAGS=""
            EXTRA_LDFLAGS=""
            ABI="armeabi"
            ;;
    esac
    DEST="$DEST/$ABI"
    FLAGS="$FLAGS --prefix=$DEST"
    mkdir -p $DEST
    echo $FLAGS --extra-cflags="$EXTRA_CFLAGS" --extra-ldflags="$EXTRA_LDFLAGS" > $DEST/info.txt
    ./configure $FLAGS --extra-cflags="$EXTRA_CFLAGS" --extra-ldflags="$EXTRA_LDFLAGS" | tee $DEST/configuration.txt
    [ $PIPESTATUS == 0 ] || exit 1
    make clean
    make -j4 || exit 1
    make install || exit 1
done
/****************Command ends here************************/
While compiling this, I get an error when linking the archive files into .so files.
The error looks like:
arm-linux-androideabi/bin/ld.exe: cannot find -lavutil
collect2: ld returned 1 exit status
make: [libswscale/libswscale.so] Error 1
make: waiting for unfinished jobs....
Can anyone please help me out of this?
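For reference, "ld.exe: cannot find -lavutil" during a shared build usually means the linker's search path does not yet contain the freshly built libavutil, often because a parallel make links libswscale before libavutil exists. A hedged sketch of the two usual workarounds; the path under build/ffmpeg mirrors this script's own output directory, and whether either step fixes this particular NDK build is an assumption.

```shell
#!/bin/sh
# Sketch: two common workarounds for "ld.exe: cannot find -lavutil".
# 1) Build serially, so libavutil is finished before libswscale links
#    against it.
# 2) Point the linker at the directory that holds the built libavutil.
# DEST mirrors the build.sh above; adjust for your ABI.
DEST="../build/ffmpeg/armeabi"

MAKE_CMD="make"                      # instead of "make -j4"
FLAGS="--extra-ldflags=-L$DEST/lib"  # extra configure switch to try

echo "$MAKE_CMD"
echo "configure extra: $FLAGS"
```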
-
Video created using H263 codec and ffmpeg does not play on android device [closed]
21 March 2013, by susheel tickoo
I have created a video using FFmpeg and the H263 codec, but when I play the video on an Android device the player is unable to play it. I have used both the .mp4 and .3gp extensions.
void generate(JNIEnv *pEnv, jobject pObj,jobjectArray stringArray,int famerate,int width,int height,jstring videoFilename)
{
AVCodec *codec;
AVCodecContext *c= NULL;
//int framesnum=5;
int i,looper, out_size, size, x, y,encodecbuffsize,j;
__android_log_write(ANDROID_LOG_INFO, "record","************into generate************");
int imagecount= (*pEnv)->GetArrayLength(pEnv, stringArray);
__android_log_write(ANDROID_LOG_INFO, "record","************got magecount************");
int retval=-10;
FILE *f;
AVFrame *picture,*encoded_avframe;
uint8_t *encodedbuffer;
jbyte *raw_record;
char logdatadata[100];
int returnvalue = -1,numBytes =-1;
const char *gVideoFileName = (char *)(*pEnv)->GetStringUTFChars(pEnv, videoFilename, NULL);
__android_log_write(ANDROID_LOG_INFO, "record","************got video file name************");
/* find the mpeg1 video encoder */
codec = avcodec_find_encoder(CODEC_ID_H264);
if (!codec) {
__android_log_write(ANDROID_LOG_INFO, "record","codec not found");
exit(1);
}
c= avcodec_alloc_context();
/*c->bit_rate = 400000;
c->width = width;
c->height = height;
c->time_base= (AVRational){1,famerate};
c->gop_size = 12; // emit one intra frame every ten frames
c->max_b_frames=0;
c->pix_fmt = PIX_FMT_YUV420P;
c->codec_type = AVMEDIA_TYPE_VIDEO;
c->codec_id = CODEC_ID_H263;*/
c->bit_rate = 400000;
// resolution must be a multiple of two
c->width = 176;
c->height = 144;
c->pix_fmt = PIX_FMT_YUV420P;
c->qcompress = 0.0;
c->qblur = 0.0;
c->gop_size = 20; //or 1
c->sub_id = 1;
c->workaround_bugs = FF_BUG_AUTODETECT;
//pFFmpeg->c->time_base = (AVRational){1,25};
c->time_base.num = 1;
c->time_base.den = famerate;
c->max_b_frames = 0; //pas de B frame en H263
// c->opaque = opaque;
c->dct_algo = FF_DCT_AUTO;
c->idct_algo = FF_IDCT_AUTO;
//lc->rtp_mode = 0;
c->rtp_payload_size = 1000;
c->rtp_callback = 0; // ffmpeg_rtp_callback;
c->flags |= CODEC_FLAG_QSCALE;
c->mb_decision = FF_MB_DECISION_RD;
c->thread_count = 1;
#define DEFAULT_RATE (16 * 8 * 1024)
c->rc_min_rate = DEFAULT_RATE;
c->rc_max_rate = DEFAULT_RATE;
c->rc_buffer_size = DEFAULT_RATE * 64;
c->bit_rate = DEFAULT_RATE;
sprintf(logdatadata, "------width from c ---- = %d",width);
__android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
sprintf(logdatadata, "------height from c ---- = %d",height);
__android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
__android_log_write(ANDROID_LOG_INFO, "record","************Found codec and now opening it************");
/* open it */
retval = avcodec_open(c, codec);
if ( retval < 0)
{
sprintf(logdatadata, "------avcodec_open ---- retval = %d",retval);
__android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
__android_log_write(ANDROID_LOG_INFO, "record","could not open codec");
exit(1);
}
__android_log_write(ANDROID_LOG_INFO, "record","statement 5");
f = fopen(gVideoFileName, "ab");
if (!f) {
__android_log_write(ANDROID_LOG_INFO, "record","could not open video file");
exit(1);
}
__android_log_write(ANDROID_LOG_INFO, "record", "***************Allocating encodedbuffer*********\n");
encodecbuffsize = avpicture_get_size(PIX_FMT_RGB24, c->width, c->height);
sprintf(logdatadata, "encodecbuffsize = %d",encodecbuffsize);
__android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
encodedbuffer = malloc(encodecbuffsize);
jclass cls = (*pEnv)->FindClass(pEnv, "com/canvasm/mediclinic/VideoGenerator");
jmethodID mid = (*pEnv)->GetMethodID(pEnv, cls, "videoProgress", "(Ljava/lang/String;)Ljava/lang/String;");
jmethodID mid_delete = (*pEnv)->GetMethodID(pEnv, cls, "deleteTempFile", "(Ljava/lang/String;)Ljava/lang/String;");
if (mid == 0)
return;
__android_log_write(ANDROID_LOG_INFO, "native","got method id");
for(i=0;i<=imagecount;i++) {
jboolean isCp;
int progress = 0;
float temp;
jstring string;
if(i==imagecount)
string = (jstring) (*pEnv)->GetObjectArrayElement(pEnv, stringArray, imagecount-1);
else
string = (jstring) (*pEnv)->GetObjectArrayElement(pEnv, stringArray, i);
const char *rawString = (*pEnv)->GetStringUTFChars(pEnv, string, &isCp);
__android_log_write(ANDROID_LOG_INFO, "record",rawString);
picture = OpenImage(rawString,width,height);
//WriteJPEG(c,picture,i);
// encode video
memset(encodedbuffer,0,encodecbuffsize);
//do{
for(looper=0;looper<5;looper++)
{
memset(encodedbuffer,0,encodecbuffsize);
out_size = avcodec_encode_video(c, encodedbuffer, encodecbuffsize, picture);
sprintf(logdatadata, "avcodec_encode_video ----- out_size = %d \n",out_size );
__android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
if(out_size>0)
break;
}
__android_log_write(ANDROID_LOG_INFO, "record","*************Start looping for same image*******");
returnvalue = fwrite(encodedbuffer, 1, out_size, f);
sprintf(logdatadata, "fwrite ----- returnvalue = %d \n",returnvalue );
__android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
__android_log_write(ANDROID_LOG_INFO, "record","*************End looping for same image*******");
// publishing progress
progress = ((i*100)/(imagecount+1))+15;//+1 is for last frame duplicated entry
if(progress<20 )
progress =20;
if(progress>=95 )
progress =95;
sprintf(logdatadata, "%d",progress );
jstring jstrBuf = (*pEnv)->NewStringUTF(pEnv, logdatadata);
(*pEnv)->CallObjectMethod(pEnv, pObj, mid,jstrBuf);
if(i>0)
(*pEnv)->CallObjectMethod(pEnv, pObj, mid_delete,string);
}
/* get the delayed frames */
for(; out_size; i++) {
fflush(stdout);
out_size = avcodec_encode_video(c, encodedbuffer, encodecbuffsize, NULL);
fwrite(encodedbuffer, 20, out_size, f);
}
/* add sequence end code to have a real mpeg file */
encodedbuffer[0] = 0x00;
encodedbuffer[1] = 0x00;
encodedbuffer[2] = 0x01;
encodedbuffer[3] = 0xb7;
fwrite(encodedbuffer, 1, 4, f);
fclose(f);
free(encodedbuffer);
avcodec_close(c);
av_free(c);
__android_log_write(ANDROID_LOG_INFO, "record","Video created ");
// last updation of 100%
sprintf(logdatadata, "%d",100 );
jstring jstrBuf = (*pEnv)->NewStringUTF(pEnv, logdatadata);
(*pEnv)->CallObjectMethod(pEnv, pObj, mid,jstrBuf);
}
AVFrame* OpenImage(const char* imageFileName,int w,int h)
{
AVFrame *pFrame;
AVCodec *pCodec ;
AVFormatContext *pFormatCtx;
AVCodecContext *pCodecCtx;
uint8_t *buffer;
int frameFinished,framesNumber = 0,retval = -1,numBytes=0;
AVPacket packet;
char logdatadata[100];
//__android_log_write(ANDROID_LOG_INFO, "OpenImage",imageFileName);
if(av_open_input_file(&pFormatCtx, imageFileName, NULL, 0, NULL)!=0)
//if(avformat_open_input(&pFormatCtx,imageFileName,NULL,NULL)!=0)
{
__android_log_write(ANDROID_LOG_INFO, "record",
"Can't open image file ");
return NULL;
}
pCodecCtx = pFormatCtx->streams[0]->codec;
pCodecCtx->width = w;
pCodecCtx->height = h;
pCodecCtx->pix_fmt = PIX_FMT_YUV420P;
// Find the decoder for the video stream
pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
if (!pCodec)
{
__android_log_write(ANDROID_LOG_INFO, "record",
"Can't open image file ");
return NULL;
}
pFrame = avcodec_alloc_frame();
numBytes = avpicture_get_size(PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));
sprintf(logdatadata, "numBytes = %d",numBytes);
__android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
retval = avpicture_fill((AVPicture *) pFrame, buffer, PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
// Open codec
if(avcodec_open(pCodecCtx, pCodec)<0)
{
__android_log_write(ANDROID_LOG_INFO, "record","Could not open codec");
return NULL;
}
if (!pFrame)
{
__android_log_write(ANDROID_LOG_INFO, "record","Can't allocate memory for AVFrame\n");
return NULL;
}
int readval = -5;
while (readval = av_read_frame(pFormatCtx, &packet) >= 0)
{
if(packet.stream_index != 0)
continue;
int ret = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
sprintf(logdatadata, "avcodec_decode_video2 ret = %d",ret);
__android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
if (ret > 0)
{
__android_log_write(ANDROID_LOG_INFO, "record","Frame is decoded\n");
pFrame->quality = 4;
av_free_packet(&packet);
av_close_input_file(pFormatCtx);
return pFrame;
}
else
{
__android_log_write(ANDROID_LOG_INFO, "record","error while decoding frame \n");
}
}
sprintf(logdatadata, "readval = %d",readval);
__android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
}
The generate method takes a list of strings (paths to images) and converts them to a video, and the OpenImage method is responsible for converting a single image to an AVFrame.
-
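One thing worth noting about the code in the question above: it fwrite()s the raw encoder output straight to a file, so naming that file .mp4 or .3gp does not make it an MP4/3GPP container, and most Android players reject a bare elementary stream. A raw H.263 stream can be wrapped into a real container afterwards with the ffmpeg CLI. A hedged sketch (file names are placeholders, and it assumes the dumped stream really is raw H.263 at the stated frame rate); the command is only assembled into a string here so the sketch runs without ffmpeg installed.

```shell
#!/bin/sh
# Sketch: wrap a raw H.263 elementary stream into a 3GPP container so a
# stock Android player can open it. Names/rate are placeholders.
RAW="frames.h263"   # what the C code's fwrite() loop produced
RATE=15             # should match the famerate passed to the encoder

WRAP_CMD="ffmpeg -r $RATE -f h263 -i $RAW -c:v copy video.3gp"
echo "$WRAP_CMD"
```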
Cuda Memory Management: re-using device memory from C calls (multithreaded, ffmpeg), but failing on cudaMemcpy
4 March 2013, by Nuke Stollak
I'm trying to CUDA-fy my ffmpeg filter, which was taking over 90% of the CPU time according to gprof. I first went from one core to OpenMP on 4 cores and got a 3.8x increase in frames encoded per second, but it's still too slow. CUDA seemed like the next natural step.
I've gotten a modest (20%?) increase by replacing one of my filter's functions with a CUDA kernel call, and just to get things up and running I was cudaMalloc'ing and cudaMemcpy'ing on each frame. I suspected I would get better results if I weren't doing this each frame, so before moving the rest of my code to CUDA I wanted to fix this by allocating the memory before my filter is called and freeing it afterwards, but the device memory isn't having it. I'm only storing the device memory locations outside of code that knows about CUDA; I'm not trying to use the data there, just save it for the next time I call a CUDA-aware function that needs it.
Here's where I am so far:
Environment: the latest Amazon Linux AMI on EC2's GPU Cluster instances, latest updates installed. Everything is fairly standard.
My filter is split into two files: vf_myfilter.c (compiled by gcc, like almost every other file in ffmpeg) and vf_myfilter_cu.cu (compiled by nvcc). My Makefile's link step includes
-lcudart
and both .o files. I build vf_myfilter_cu.o using (as one line):
nvcc -I. -I./ -I/opt/nvidia/cuda/include $(CPPFLAGS)
-Xcompiler "$(CFLAGS)"
-c -o libfilter/vf_myfilter_cu.o libfilter/vf_myfilter_cu.cu
When the variables (set by configure) are expanded, here's what I get, again all in one line but split up here for easier reading. I just noticed the duplicate include path directives, but it shouldn't hurt.
nvcc -I. -I./ -I/opt/nvidia/cuda/include -I. -I./ -D_ISOC99_SOURCE
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_POSIX_C_SOURCE=200112
-D_XOPEN_SOURCE=600 -DHAVE_AV_CONFIG_H
-XCompiler "-fopenmp -std=c99 -fomit-frame-pointer -pthread -g
-Wdeclaration-after-statment -Wall -Wno-parentheses
-Wno-switch -Wno-format-zero-length -Wdisabled-optimization
-Wpointer-arith -Wredundant-decls -Wno-pointer-sign
-Wwrite-strings -Wtype-limits -Wundef -Wmissing-prototypes
-Wno-pointer-to-int-case -Wstrict-prototypes -O3 -fno-math-errno
-fno-signed-zeros -fno-tree-vectorize
-Werror=implicit-function-declaration -Werror=missing-prototypes
-Werror=vla "
-c -o libavfilter/vf_myfilter_cu.o libavfilter/vf_myfilter_cu.cu
vf_myfilter.c calls three functions from the vf_myfilter_cu.cu file, which handle memory and call the CUDA kernel code. I thought I would be able to save the device pointers from my memory initialization, which runs once per ffmpeg run, and re-use that space each time I called the wrapper for my kernel function; but when I cudaMemcpy from my host memory to the device memory pointer I stored, it fails with cudaErrorInvalidValue. If I cudaMalloc my device memory on every frame, I'm fine.
I plan on using pinned host memory, once I have everything up in CUDA code and have minimized the number of times I need to return to the main ffmpeg code.
Steps taken:
First sign of trouble: search the web. I found "Passing a pointer to device memory between classes in CUDA" and printed out the pointers at various places in my execution to ensure that the device memory values were the same everywhere, and they are. FWIW, they seem to start around 0x90010000.
ffmpeg's configure gave me -pthreads, so I checked whether my filter was being called from multiple threads, following "how can I tell if pthread_self is the main (first) thread in the process?" and checking syscall(SYS_gettid) == getpid() to ensure that I'm not calling CUDA from different threads; I'm indeed in the primary thread at every step, according to those two functions. I am still using OpenMP later, around some for loops in the main .c filter function, but the calls to CUDA don't occur in those loops.
Code Overview:
ffmpeg provides me a MyfilterContext structure pointer on each frame, as well as on the filter's config_input and uninit routines (called once per file), so I added some *host_var and *dev_var variables (a few of each, float and unsigned char).
There is a whole lot of code I skipped for this post, but most of it has to do with my algorithm and details involved in writing an ffmpeg filter. I'm actually using about 6 host variables and 7 device variables right now, but for demonstration I limited it to one of each.
Here is, broadly, what my vf_myfilter.c looks like.
// declare my functions from vf_myfilter_cu.cu
extern void cudaMyInit(unsigned char **dev_var, size_t mysize);
extern void cudaMyUninit(unsigned char *dev_var);
extern void cudaMyFunction(unsigned char *host_var, unsigned char *dev_var, size_t mysize);
// part of the MyFilterContext structure, which ffmpeg keeps track of for me.
typedef struct {
unsigned char *host_var;
unsigned char *dev_var;
} MyFilterContext;
// ffmpeg calls this function once per file, before any frames are processed.
static int config_input(AVFilterLink *inlink) {
// how ffmpeg passes me my context, fairly standard.
MyfilterContext * myContext = inlink->dst->priv;
// compute the size one video plane of one frame of video
size_t mysize = sizeof(unsigned char) * inlink->w * inlink->h;
// av_mallocz is a malloc wrapper provided and required by ffmpeg
myContext->host_var = (unsigned char*) av_mallocz(mysize);
// Here's where I attempt to allocate my device memory.
cudaMyInit( & myContext->dev_var, mysize);
}
// Called once per frame of video
static int filter_frame(AVFilterLink *inlink, AVFilterBufferRef *frame) {
MyFilterContext *myContext = inlink->dst->priv;
// sanity check to make sure that this isn't part of the multithreaded code
if ( syscall(SYS_gettid) == getpid() )
av_log(.... ); // This line never runs, so it's not threaded?
// ...fill host_var with data from frame,
// set mysize to the size of the buffer
// Call my wrapper function defined in the .cu file
cudaMyFunction(myContext->host_var, myContext->dev_var, mysize);
// ... take the results from host_var and apply them to frame
// ... and return the processed frame to ffmpeg
}
// called after everything else has happened: free up the memory.
static av_cold void uninit(AVFilterContext *ctx) {
MyFilterContext *myContext = ctx->priv;
// free my host_var
if(myContext->host_var!=NULL) {
av_free(myContext->host_var);
myContext->host_var=NULL;
}
// free my dev_var
cudaMyUninit(myContext->dev_var);
}
Here is, broadly, what my vf_myfilter_cu.cu looks like:
// my kernel function that does the work.
__global__ void myfunc(unsigned char *dev_var, size_t mysize) {
// find the offset for this particular GPU thread to process
// exit this function if the block/thread combo points to somewhere
// outside the frame
// make sure we're less than mysize bytes from the beginning of dev_var
// do things to dev_var[some_offset]
}
// Allocate the device memory
extern "C" void cudaMyInit(unsigned char **dev_var, size_t mysize) {
if(cudaMalloc( (void**) dev_var, mysize) != cudaSuccess) {
printf("Cannot allocate the memory\n");
}
}
// Free the device memory.
extern "C" void cudaMyUninit(unsigned char *dev_var) {
cudaFree(dev_var);
}
// Copy data from the host to the device,
// Call the kernel function, and
// Copy data from the device to the host.
extern "C" void cudaMyFunction(
unsigned char *host_var,
unsigned char *dev_var,
size_t mysize )
{
cudaError_t cres;
// dev_works is what I want to get rid of, but
// to make sure that there's not something more obvious going
// on, I made sure that my cudaMemcpy works if I'm allocating
// the device memory in every frame.
unsigned char *dev_works;
if(cudaMalloc( (void **) &dev_works, mysize)!=cudaSuccess) {
// I don't see this message
printf("failed at per-frame malloc\n");
}
// THIS PART WORKS, copying host_var to dev_works
cres=cudaMemcpy( (void *) dev_works, host_var, mysize, cudaMemcpyHostToDevice);
if(cres!=cudaSuccess) {
if(cres==cudaErrorInvalidValue) {
// I don't see this message.
printf("cudaErrorInvalidValue at per-frame cudaMemcpy\n");
}
}
// THIS PART FAILS, copying host_var to dev_var
cres=cudaMemcpy( (void *) dev_var, host_var, mysize, cudaMemcpyHostToDevice);
if(cres!=cudaSuccess) {
if(cres==cudaErrorInvalidValue) {
// this is the error code that prints.
printf("cudaErrorInvalidValue at per-frame cudaMemcpy\n");
}
// I check for other error codes, but they're not being hit.
}
// and this works with dev_works
myfunc<<>>(dev_works, mysize);
if(cudaMemcpy(host_var, dev_works, mysize, cudaMemcpyDeviceToHost)!=cudaSuccess) {
// I don't see this message.
printf("Failed to copy post-kernel func\n");
}
cudaFree(dev_works);
}
Any ideas?