
Media (91)
-
Valkaama DVD Cover Outside
4 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011, by
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
1,000,000
27 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Demon Seed
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
-
The Four of Us are Dying
26 September 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (46)
-
Creating farms of unique websites
13 April 2011, by
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...) -
List of compatible distributions
26 April 2011, by
The table below is the list of Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name | Version name | Version number
Debian | Squeeze | 6.x.x
Debian | Wheezy | 7.x.x
Debian | Jessie | 8.x.x
Ubuntu | The Precise Pangolin | 12.04 LTS
Ubuntu | The Trusty Tahr | 14.04
If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...) -
Definable image and logo sizes
9 February 2011, by
In many places on the site, logos and images are resized to fit the slots defined by the themes. Since these sizes can vary from one theme to another, they can be defined directly in the theme, sparing the user from having to configure them manually after changing the site's appearance.
These image sizes are also available in the MediaSPIP Core specific configuration. The maximum size of the site logo in pixels, we allow (...)
On other sites (3500)
-
Adventures in Unicode
Tangential to multimedia hacking is proper metadata handling. Recently, I have gathered an interest in processing a large corpus of multimedia files which are likely to contain metadata strings which do not fall into the lower ASCII set. This is significant because the lower ASCII set intersects perfectly with my own programming comfort zone. Indeed, all of my programming life, I have insisted on covering my ears and loudly asserting “LA LA LA LA LA! ALL TEXT EVERYWHERE IS ASCII!” I suspect I’m not alone in this.
Thus, I took this as an opportunity to conquer my longstanding fear of Unicode. I developed a self-learning course comprised of a series of exercises which add up to this diagram:
Part 1: Understanding Text Encoding
Python has regular strings by default and then it has Unicode strings. The latter are prefixed by the letter ‘u’. This is what ‘ö’ looks like encoded in each type:
>>> 'ö', u'ö'
('\xc3\xb6', u'\xf6')
A large part of my frustration with Unicode comes from Python yelling at me about UnicodeDecodeErrors and an inability to handle the number 0xc3 for some reason. This usually comes when I’m trying to wrap my head around an unrelated problem and don’t care to get sidetracked by text encoding issues. However, when I studied the above output, I finally understood where the 0xc3 comes from. I just didn’t understand what the encoding represents exactly.
I can see from assorted tables that ‘ö’ is character 0xF6 in various encodings (in Unicode and Latin-1), so u'\xf6' makes sense. But what does '\xc3\xb6' mean? It’s my style to excavate straight down to the lowest levels, and I wanted to understand exactly how characters are represented in memory. The UTF-8 encoding tables inform us that any Unicode code point above 0x7F but less than 0x800 will be encoded with 2 bytes:
110xxxxx 10xxxxxx
Applying this pattern to the \xc3\xb6 encoding:
hex            : 0xc3     0xb6
bits           : 11000011 10110110
important bits : ---00011 --110110
assembled      : 00011110110
code point     : 0xf6
I was elated when I drew that out and made the connection. Maybe I’m the last programmer to figure this stuff out. But I’m still happy that I actually understand those Python errors pertaining to the number 0xc3 and that I won’t have to apply canned solutions without understanding the core problem.
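The bit-twiddling above can be reproduced directly on the raw bytes; here is a small sketch in modern Python 3 syntax (which uses bytes/str where Python 2 used str/unicode):

```python
# Decode the 2-byte UTF-8 sequence \xc3\xb6 by hand.
data = b'\xc3\xb6'
b1, b2 = data[0], data[1]

# A 2-byte sequence looks like 110xxxxx 10xxxxxx.
assert b1 & 0b11100000 == 0b11000000  # leading byte of a 2-byte sequence
assert b2 & 0b11000000 == 0b10000000  # continuation byte

# Keep the payload bits and glue them together.
code_point = ((b1 & 0b00011111) << 6) | (b2 & 0b00111111)
print(hex(code_point))  # 0xf6
print(chr(code_point))  # ö

# The built-in decoder agrees.
assert chr(code_point) == data.decode('utf-8')
```

Running the same arithmetic as the table confirms that 0xc3 0xb6 reassembles to code point 0xf6.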
I’m cheating on this part of this exercise just a little bit since the diagram implied that the Unicode text needs to come from a binary file. I’ll return to that in a bit. For now, I’ll just contrive the following Unicode string from the Python REPL:
>>> u = u'Üñìçôđé'
>>> u
u'\xdc\xf1\xec\xe7\xf4\u0111\xe9'
Part 2: From Python To SQLite3
The next step is to see what happens when I use Python’s SQLite3 module to dump the string into a new database. Will the Unicode encoding be preserved on disk? What will UTF-8 look like on disk anyway?
>>> import sqlite3
>>> conn = sqlite3.connect('unicode.db')
>>> conn.execute("CREATE TABLE t (t text)")
>>> conn.execute("INSERT INTO t VALUES (?)", (u, ))
>>> conn.commit()
>>> conn.close()
Next, I manually view the resulting database file (unicode.db) using a hex editor and look for strings. Here we go:
000007F0 02 29 C3 9C C3 B1 C3 AC C3 A7 C3 B4 C4 91 C3 A9
Look at that! It’s just like the \xc3\xb6 encoding we see in the regular Python strings.
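The same check can be made without a hex editor by asking Python for the UTF-8 encoding directly. A sketch in Python 3 syntax, using an in-memory database instead of unicode.db:

```python
import sqlite3

u = '\xdc\xf1\xec\xe7\xf4\u0111\xe9'  # 'Üñìçôđé'

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE t (t text)")
conn.execute("INSERT INTO t VALUES (?)", (u,))

# The round trip through SQLite preserves the string...
row = conn.execute("SELECT t FROM t").fetchone()
assert row[0] == u

# ...and the UTF-8 encoding matches the byte run seen in the hex dump.
assert u.encode('utf-8') == bytes.fromhex('c39cc3b1c3acc3a7c3b4c491c3a9')
conn.close()
```

SQLite stores TEXT as UTF-8 by default, which is why the hex editor shows exactly these bytes.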
Part 3: From SQLite3 To A Web Page Via PHP
Finally, use PHP (love it or hate it, but it’s what’s most convenient on my hosting provider) to query the string from the database and display it on a web page, completing the outlined processing pipeline.
<?php
$dbh = new PDO("sqlite:unicode.db");
foreach ($dbh->query("SELECT t from t") as $row)
{
    $unicode_string = $row['t'];
}
?>

<html>
<head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head>
<body><h1><?= $unicode_string ?></h1></body>
</html>
I tested the foregoing PHP script on 3 separate browsers that I had handy (Firefox, Internet Explorer, and Chrome):
I’d say that counts as success! It’s important to note that the “meta http-equiv” tag is absolutely necessary. Omit it and see something like this:
Since we know what the UTF-8 stream looks like, it’s pretty obvious how the mapping is operating here: 0xc3 and 0xc4 correspond to ‘Ã’ and ‘Ä’, respectively. This corresponds to an encoding named ISO/IEC 8859-1, a.k.a. Latin-1. Speaking of which…
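That mis-mapping is easy to reproduce: decode UTF-8 bytes as Latin-1 and every byte becomes its own character. A quick sketch in Python 3 syntax:

```python
s = '\xdc\xf1\xec\xe7\xf4\u0111\xe9'  # 'Üñìçôđé'
utf8_bytes = s.encode('utf-8')

# Latin-1 maps each byte N straight to code point N, so every UTF-8
# lead byte 0xc3/0xc4 shows up as the character 'Ã'/'Ä'.
mojibake = utf8_bytes.decode('latin-1')
assert mojibake[0] == '\xc3'             # 'Ã', from Ü's lead byte
assert len(mojibake) == len(utf8_bytes)  # one character per byte
```

This is exactly the doubled-up garbage a browser renders when UTF-8 content is served without a charset declaration.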
Part 4: Converting Binary Data To Unicode
At the start of the experiment, I was trying to extract metadata strings from these binary multimedia files and I noticed characters like our friend ‘ö’ from above. In the bytestream, this was represented simply with 0xf6. I mistakenly believed that this was the on-disk representation of UTF-8. Wrong. Turns out it’s Latin-1.
However, I still need to solve the problem of transforming such strings into Unicode to be shoved through the pipeline diagrammed above. For this experiment, I created a 9-byte file with the Latin-1 string ‘Üñìçôdé’ couched by 0’s, to simulate yanking a string out of a binary file. Here’s unicode.file:
00000000 00 DC F1 EC E7 F4 64 E9 00 ......d..
(Aside: this experiment uses plain ‘d’ since the ‘đ’ with a bar through it doesn’t occur in Latin-1; it shows up all over the place in Vietnamese, at least.)
I’ve been mashing around Python code via the REPL, trying to get this string into a Unicode-friendly format. This is a successful method but it’s probably not the best:
>>> import struct
>>> f = open('unicode.file', 'r').read()
>>> u = u''
>>> for c in struct.unpack("B"*7, f[1:8]):
...     u += unichr(c)
...
>>> u
u'\xdc\xf1\xec\xe7\xf4d\xe9'
>>> print u
Üñìçôdé
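For what it's worth, the unichr() loop above collapses to a one-line decode, since Latin-1 bytes map one-to-one onto the first 256 Unicode code points. A sketch in Python 3 syntax, where the file contents are handled as bytes:

```python
# The 9-byte unicode.file from the hex dump above, inlined as bytes.
data = b'\x00\xdc\xf1\xec\xe7\xf4\x64\xe9\x00'

# Latin-1 decoding maps byte N to code point N, exactly like the loop.
text = data[1:8].decode('latin-1')
print(text)  # Üñìçôdé
assert text == '\xdc\xf1\xec\xe7\xf4d\xe9'
```

In a real program the file would be opened with open('unicode.file', 'rb') so the slice is bytes rather than text.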
Conclusion
Dealing with text encoding matters reminds me of dealing with integer endianness concerns. When you’re just dealing with one system, you probably don’t need to think too much about it because the system is usually handling everything consistently underneath the covers. However, when the data leaves one system and will be interpreted by another system, that’s when a programmer needs to be cognizant of matters such as integer endianness or text encoding.
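The analogy is direct: just as a code point has more than one byte-level encoding, an integer has more than one byte order, and the bytes only make sense if both sides agree. A small sketch using Python's struct module:

```python
import struct

n = 0x12345678

# The same integer, two on-disk/on-wire representations.
little = struct.pack('<I', n)  # least significant byte first
big    = struct.pack('>I', n)  # most significant byte first
assert little == b'\x78\x56\x34\x12'
assert big    == b'\x12\x34\x56\x78'

# Reading with the wrong byte order "decodes" a different number,
# just as reading UTF-8 bytes as Latin-1 shows the wrong characters.
assert struct.unpack('<I', big)[0] == 0x78563412
assert struct.unpack('>I', big)[0] == n
```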
-
Problems with Streaming a Multicast RTSP Stream with Live555
16 June 2014, by ALM865
I am having trouble setting up a multicast RTSP session using Live555. The examples included with Live555 are mostly irrelevant as they deal with reading in files, and my code differs because it reads in encoded frames generated from an FFMPEG thread within my own program (no pipes, no saving to disk; it genuinely passes pointers to memory that contain the encoded frames for Live555 to stream).
My Live555 project uses a custom ServerMediaSubsession so that I can receive data from an FFMPEG thread within my program (instead of Live555’s default reading from a file, yuk!). This is a requirement of my program, as it reads in a GigEVision stream in one thread, sends the decoded raw RGB packets to the FFMPEG thread, which then in turn sends the encoded frames off to Live555 for RTSP streaming.
For the life of me I can’t work out how to send the RTSP stream as multicast instead of unicast!
Just a note, my program works perfectly at the moment streaming unicast, so there is nothing wrong with my Live555 implementation (before you go crazy picking out irrelevant errors!). I just need to know how to modify my existing code to stream multicast instead of unicast.
My program is way too big to upload and share, so I’m just going to share the important bits:
Live_AnalysingServerMediaSubsession.h
#ifndef _ANALYSING_SERVER_MEDIA_SUBSESSION_HH
#define _ANALYSING_SERVER_MEDIA_SUBSESSION_HH
#include "OnDemandServerMediaSubsession.hh" // header name lost in the original post; inferred from the base class used below
#include "Live_AnalyserInput.h"
class AnalysingServerMediaSubsession: public OnDemandServerMediaSubsession {
public:
static AnalysingServerMediaSubsession*
createNew(UsageEnvironment& env, AnalyserInput& analyserInput, unsigned estimatedBitrate,
Boolean iFramesOnly = False,
double vshPeriod = 5.0
/* how often (in seconds) to inject a Video_Sequence_Header,
if one doesn't already appear in the stream */);
protected: // we're a virtual base class
AnalysingServerMediaSubsession(UsageEnvironment& env, AnalyserInput& AnalyserInput, unsigned estimatedBitrate, Boolean iFramesOnly, double vshPeriod);
virtual ~AnalysingServerMediaSubsession();
protected:
AnalyserInput& fAnalyserInput;
unsigned fEstimatedKbps;
private:
Boolean fIFramesOnly;
double fVSHPeriod;
// redefined virtual functions
virtual FramedSource* createNewStreamSource(unsigned clientSessionId, unsigned& estBitrate);
virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic, FramedSource* inputSource);
};
#endif

And "Live_AnalysingServerMediaSubsession.cpp":
#include "Live_AnalysingServerMediaSubsession.h"
#include "MPEG1or2VideoStreamDiscreteFramer.hh" // header names lost in the original post;
#include "MPEG1or2VideoRTPSink.hh"             // inferred from the classes used below
#include "GroupsockHelper.hh"
AnalysingServerMediaSubsession* AnalysingServerMediaSubsession::createNew(UsageEnvironment& env, AnalyserInput& wisInput, unsigned estimatedBitrate,
Boolean iFramesOnly,
double vshPeriod) {
return new AnalysingServerMediaSubsession(env, wisInput, estimatedBitrate,
iFramesOnly, vshPeriod);
}
AnalysingServerMediaSubsession
::AnalysingServerMediaSubsession(UsageEnvironment& env, AnalyserInput& analyserInput, unsigned estimatedBitrate, Boolean iFramesOnly, double vshPeriod)
: OnDemandServerMediaSubsession(env, True /*reuse the first source*/),
fAnalyserInput(analyserInput), fIFramesOnly(iFramesOnly), fVSHPeriod(vshPeriod) {
fEstimatedKbps = (estimatedBitrate + 500)/1000;
}
AnalysingServerMediaSubsession
::~AnalysingServerMediaSubsession() {
}
FramedSource* AnalysingServerMediaSubsession ::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
estBitrate = fEstimatedKbps;
// Create a framer for the Video Elementary Stream:
//LOG_MSG("Create Net Stream Source [%d]", estBitrate);
return MPEG1or2VideoStreamDiscreteFramer::createNew(envir(), fAnalyserInput.videoSource());
}
RTPSink* AnalysingServerMediaSubsession ::createNewRTPSink(Groupsock* rtpGroupsock, unsigned char /*rtpPayloadTypeIfDynamic*/, FramedSource* /*inputSource*/) {
setVideoRTPSinkBufferSize();
/*
struct in_addr destinationAddress;
destinationAddress.s_addr = inet_addr("239.255.12.42");
rtpGroupsock->addDestination(destinationAddress,8888);
rtpGroupsock->multicastSendOnly();
*/
return MPEG1or2VideoRTPSink::createNew(envir(), rtpGroupsock);
}

Live_AnalyserSource.h
#ifndef _ANALYSER_SOURCE_HH
#define _ANALYSER_SOURCE_HH
#ifndef _FRAMED_SOURCE_HH
#include "FramedSource.hh"
#endif
class FFMPEG;
// The following class can be used to define specific encoder parameters
class AnalyserParameters {
public:
FFMPEG * Encoding_Source;
};
class AnalyserSource: public FramedSource {
public:
static AnalyserSource* createNew(UsageEnvironment& env, FFMPEG * E_Source);
static unsigned GetRefCount();
public:
static EventTriggerId eventTriggerId;
protected:
AnalyserSource(UsageEnvironment& env, FFMPEG * E_Source);
// called only by createNew(), or by subclass constructors
virtual ~AnalyserSource();
private:
// redefined virtual functions:
virtual void doGetNextFrame();
private:
static void deliverFrame0(void* clientData);
void deliverFrame();
private:
static unsigned referenceCount; // used to count how many instances of this class currently exist
FFMPEG * Encoding_Source;
unsigned int Last_Sent_Frame_ID;
};
#endif

Live_AnalyserSource.cpp
#include "Live_AnalyserSource.h"
#include <sys/time.h> // for "gettimeofday()"
#include "FFMPEGClass.h"
AnalyserSource* AnalyserSource::createNew(UsageEnvironment& env, FFMPEG * E_Source) {
return new AnalyserSource(env, E_Source);
}
EventTriggerId AnalyserSource::eventTriggerId = 0;
unsigned AnalyserSource::referenceCount = 0;
AnalyserSource::AnalyserSource(UsageEnvironment& env, FFMPEG * E_Source) : FramedSource(env), Encoding_Source(E_Source) {
if (referenceCount == 0) {
// Any global initialization of the device would be done here:
}
++referenceCount;
// Any instance-specific initialization of the device would be done here:
Last_Sent_Frame_ID = 0;
/* register us with the Encoding thread so we'll get notices when new frame data turns up.. */
Encoding_Source->RegisterRTSP_Source(&(env.taskScheduler()), this);
// We arrange here for our "deliverFrame" member function to be called
// whenever the next frame of data becomes available from the device.
//
// If the device can be accessed as a readable socket, then one easy way to do this is using a call to
// envir().taskScheduler().turnOnBackgroundReadHandling( ... )
// (See examples of this call in the "liveMedia" directory.)
//
// If, however, the device *cannot* be accessed as a readable socket, then instead we can implement this using 'event triggers':
// Create an 'event trigger' for this device (if it hasn't already been done):
if (eventTriggerId == 0) {
eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
}
}
AnalyserSource::~AnalyserSource() {
// Any instance-specific 'destruction' (i.e., resetting) of the device would be done here:
/* de-register this source from the Encoding thread, since we no longer need notices.. */
Encoding_Source->Un_RegisterRTSP_Source(this);
--referenceCount;
if (referenceCount == 0) {
// Any global 'destruction' (i.e., resetting) of the device would be done here:
// Reclaim our 'event trigger'
envir().taskScheduler().deleteEventTrigger(eventTriggerId);
eventTriggerId = 0;
}
}
unsigned AnalyserSource::GetRefCount() {
return referenceCount;
}
void AnalyserSource::doGetNextFrame() {
// This function is called (by our 'downstream' object) when it asks for new data.
//LOG_MSG("Do Next Frame..");
// Note: If, for some reason, the source device stops being readable (e.g., it gets closed), then you do the following:
//if (0 /* the source stops being readable */ /*%%% TO BE WRITTEN %%%*/) {
unsigned int FrameID = Encoding_Source->GetFrameID();
if (FrameID == 0){
//LOG_MSG("No Data. Close");
handleClosure(this);
return;
}
// If a new frame of data is immediately available to be delivered, then do this now:
if (Last_Sent_Frame_ID != FrameID){
deliverFrame();
//DEBUG_MSG("Frame ID: %d",FrameID);
}
// No new data is immediately available to be delivered. We don't do anything more here.
// Instead, our event trigger must be called (e.g., from a separate thread) when new data becomes available.
}
void AnalyserSource::deliverFrame0(void* clientData) {
((AnalyserSource*)clientData)->deliverFrame();
}
void AnalyserSource::deliverFrame() {
if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet
static u_int8_t* newFrameDataStart;
static unsigned newFrameSize = 0;
/* get the data frame from the Encoding thread.. */
if (Encoding_Source->GetFrame(&newFrameDataStart, &newFrameSize, &Last_Sent_Frame_ID)){
if (newFrameDataStart!=NULL) {
/* This should never happen, but check anyway.. */
if (newFrameSize > fMaxSize) {
fFrameSize = fMaxSize;
fNumTruncatedBytes = newFrameSize - fMaxSize;
} else {
fFrameSize = newFrameSize;
}
gettimeofday(&fPresentationTime, NULL); // If you have a more accurate time - e.g., from an encoder - then use that instead.
// If the device is *not* a 'live source' (e.g., it comes instead from a file or buffer), then set "fDurationInMicroseconds" here.
/* move the data to be sent off.. */
memmove(fTo, newFrameDataStart, fFrameSize);
/* release the Mutex we had on the Frame's buffer.. */
Encoding_Source->ReleaseFrame();
}
else {
//AM Added, something bad happened
//ALTRACE("LIVE555: FRAME NULL\n");
fFrameSize=0;
fTo=NULL;
handleClosure(this);
}
}
else {
//LOG_MSG("Closing Connection due to Frame Error..");
handleClosure(this);
}
// After delivering the data, inform the reader that it is now available:
FramedSource::afterGetting(this);
}

Live_AnalyserInput.cpp
#include "Live_AnalyserInput.h"
#include "Live_AnalyserSource.h"
////////// WISInput implementation //////////
AnalyserInput* AnalyserInput::createNew(UsageEnvironment& env, FFMPEG *Encoder) {
if (!fHaveInitialized) {
//if (!initialize(env)) return NULL;
fHaveInitialized = True;
}
return new AnalyserInput(env, Encoder);
}
FramedSource* AnalyserInput::videoSource() {
if (fOurVideoSource == NULL || AnalyserSource::GetRefCount() == 0) {
fOurVideoSource = AnalyserSource::createNew(envir(), m_Encoder);
}
return fOurVideoSource;
}
AnalyserInput::AnalyserInput(UsageEnvironment& env, FFMPEG *Encoder): Medium(env), m_Encoder(Encoder) {
}
AnalyserInput::~AnalyserInput() {
/* When we get destroyed, make sure our source is also destroyed.. */
if (fOurVideoSource != NULL && AnalyserSource::GetRefCount() != 0) {
AnalyserSource::handleClosure(fOurVideoSource);
}
}
Boolean AnalyserInput::fHaveInitialized = False;
int AnalyserInput::fOurVideoFileNo = -1;
FramedSource* AnalyserInput::fOurVideoSource = NULL;

Live_AnalyserInput.h
#ifndef _ANALYSER_INPUT_HH
#define _ANALYSER_INPUT_HH
#include "liveMedia.hh" // header name lost in the original post
#include "FFMPEGClass.h"
class AnalyserInput: public Medium {
public:
static AnalyserInput* createNew(UsageEnvironment& env, FFMPEG *Encoder);
FramedSource* videoSource();
private:
AnalyserInput(UsageEnvironment& env, FFMPEG *Encoder); // called only by createNew()
virtual ~AnalyserInput();
private:
friend class WISVideoOpenFileSource;
static Boolean fHaveInitialized;
static int fOurVideoFileNo;
static FramedSource* fOurVideoSource;
FFMPEG *m_Encoder;
};
// Functions to set the optimal buffer size for RTP sink objects.
// These should be called before each RTPSink is created.
#define VIDEO_MAX_FRAME_SIZE 300000
inline void setVideoRTPSinkBufferSize() { OutPacketBuffer::maxSize = VIDEO_MAX_FRAME_SIZE; }
#endif

And finally the relevant code from my Live555 worker thread that starts the whole process:
Stop_RTSP_Loop=0;
// MediaSession *ms;
TaskScheduler *scheduler;
UsageEnvironment *env ;
// RTSPClient *rtsp;
// MediaSubsession *Video_Sub;
char RTSP_Address[1024];
RTSP_Address[0]=0x00;
if (m_Encoder == NULL){
//DEBUG_MSG("No Video Encoder registered for the RTSP Encoder");
return 0;
}
scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);
UserAuthenticationDatabase* authDB = NULL;
#ifdef ACCESS_CONTROL
// To implement client access control to the RTSP server, do the following:
if (m_Enable_Pass){
authDB = new UserAuthenticationDatabase;
authDB->addUserRecord(UserN, PassW);
}
////////// authDB = new UserAuthenticationDatabase;
////////// authDB->addUserRecord((char*)"Admin", (char*)"Admin"); // replace these with real strings
// Repeat the above with each <username>, <password> that you wish to allow
// access to the server.
#endif
// Create the RTSP server:
RTSPServer* rtspServer = RTSPServer::createNew(*env, 554, authDB);
ServerMediaSession* sms;
AnalyserInput* inputDevice;
if (rtspServer == NULL) {
TRACE("LIVE555: Failed to create RTSP server: %s\n", env->getResultMsg());
return 0;
}
else {
char const* descriptionString = "Session streamed by \"IMC Server\"";
// Initialize the WIS input device:
inputDevice = AnalyserInput::createNew(*env, m_Encoder);
if (inputDevice == NULL) {
TRACE("Live555: Failed to create WIS input device\n");
return 0;
}
else {
// A MPEG-1 or 2 video elementary stream:
/* Increase the buffer size so we can handle the high res stream.. */
OutPacketBuffer::maxSize = 300000;
// NOTE: This *must* be a Video Elementary Stream; not a Program Stream
sms = ServerMediaSession::createNew(*env, RTSP_Address, RTSP_Address, descriptionString);
//sms->addSubsession(MPEG1or2VideoFileServerMediaSubsession::createNew(*env, inputFileName, reuseFirstSource, iFramesOnly));
sms->addSubsession(AnalysingServerMediaSubsession::createNew(*env, *inputDevice, m_Encoder->Get_Bitrate()));
//sms->addSubsession(WISMPEG1or2VideoServerMediaSubsession::createNew(sms->envir(), inputDevice, videoBitrate));
rtspServer->addServerMediaSession(sms);
//announceStream(rtspServer, sms, streamName, inputFileName);
//LOG_MSG("Play this stream using the URL %s", rtspServer->rtspURL(sms));
}
}
Stop_RTSP_Loop=0;
for (;;)
{
/* The actual work is all carried out inside the LIVE555 Task scheduler */
env->taskScheduler().doEventLoop(&Stop_RTSP_Loop); // does not return
if (mStop) {
break;
}
}
Medium::close(rtspServer); // will also reclaim "sms" and its "ServerMediaSubsession"s
Medium::close(inputDevice);
-
How to Choose the Optimal Multi-Touch Attribution Model for Your Organisation
13 March 2023, by Erin — Analytics Tips