
Media (3)
-
Valkaama DVD Cover Outside
4 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011
Updated: October 2011
Language: English
Type: Image
Other articles (12)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
- the browser you are using, including the exact version
- as precise an explanation of the problem as possible
- if possible, the steps taken that led to the problem
- a link to the site/page in question
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
-
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, an SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour:
- retrieval of the technical information about the file's audio and video streams;
- generation of a thumbnail: extraction of a (...)
-
MediaSPIP initialisation (preconfiguration)
20 February 2010
When MediaSPIP is installed, it is preconfigured for the most common uses.
This preconfiguration is carried out by a plugin called MediaSPIP Init, which is enabled by default and cannot be disabled.
This plugin properly preconfigures each MediaSPIP instance. It must therefore be placed in the plugins-dist/ directory of the site or farm, so that it is installed by default, before the site can be used.
First of all, it enables or disables SPIP options which (...)
On other sites (2331)
-
Android streaming screen [closed]
21 May 2013, by blganesh
I'm able to share the screen via ffmpeg:
./ffmpeg -f fbdev -r 24 -i /dev/graphics/fb0 http://localhost:8090/feed1.ffm
But the output live stream is very slow.
Following is the conf file which I'm using:
Port 8090
RTSPPort 7654
BindAddress 0.0.0.0
RTSPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 1000
CustomLog -
NoDaemon
<feed>
File /data/live1.ffm
FileMaxSize 40M
NoAudio
ACL allow 127.0.0.1
</feed>
<stream>
Feed live1.ffm
Format mpeg2video
NoAudio
VideoBitRate 1024
VideoFrameRate 1
VideoBufferSize 10000
VideoSize 480x800
VideoQMin 1
VideoQMax 15
</stream>
Kindly let me know how I should change my conf file to get a fast video output.
-
Method For Crawling Google
28 May 2011, by Multimedia Mike — Big Data
I wanted to crawl Google in order to harvest a large corpus of certain types of data as yielded by a certain search term (we’ll call it “term” for this exercise). Google doesn’t appear to offer any API to automatically harvest their search results (why would they?). So I sat down and thought about how to do it. This is the solution I came up with.
FAQ
Q: Is this legal / ethical / compliant with Google’s terms of service?
A: Does it look like I care? Moving right along…

Manual Crawling Process
For this exercise, I essentially automated the task that would be performed by a human. It goes something like this:
- Search for “term”
- On the first page of results, download each of the 10 results returned
- Click on the next page of results
- Go to step 2, until Google doesn’t return any more pages of search results
Google returns up to 1000 results for a given search term. Fetching them 10 at a time is less than efficient. Fortunately, the search URL can easily be tweaked to return up to 100 results per page.
Expanding Reach
Problem: 1000 results for the “term” search isn’t that many. I need a way to expand the search. I’m not aiming for relevancy; I’m just searching for random examples of some data that occurs around the internet.

My solution for this is to refine the search using the “site” wildcard. For example, you can ask Google to search for “term” at all Canadian domains using “site:.ca”. So, the manual process now involves harvesting up to 1000 results for every single internet top-level domain (TLD). But many TLDs can be more granular than that. For example, there are 50 sub-domains under .us, one for each state (e.g., .ca.us, .ny.us). Those all need to be searched independently. The same goes for the sub-domains under TLDs which don’t allow domains directly under the main TLD, such as .uk (search under .co.uk, .ac.uk, etc.).
Another extension is to combine “term” searches with other terms that are likely to have a rich correlation with “term”. For example, if “term” is relevant to various scientific fields, search for “term” in conjunction with various scientific disciplines.
Algorithmically
My solution is to create an SQLite database that contains a table of search seeds. Each seed is essentially a “site:” string combined with a starting index. Each TLD and sub-TLD is inserted as a searchseed record with a starting index of 0.
A script performs the following crawling algorithm (a sketch in Python follows the list):
- Fetch the next record from the searchseed table which has not been crawled
- Fetch search result page from Google
- Scrape URLs from page and insert each into URL table
- Mark the searchseed record as having been crawled
- If the results page indicates there are more results for this search, insert a new searchseed for the same seed but with a starting index 100 higher
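Below is a minimal sketch of this loop in Python. The two-table schema (searchseed, url), the helper names, and the crude placeholder parsers are all invented for illustration; the post does not show its actual schema or scraping code.

import random
import re
import sqlite3
import time
import urllib.parse
import urllib.request

DB = sqlite3.connect("crawl.db")
DB.executescript("""
CREATE TABLE IF NOT EXISTS searchseed (
    id      INTEGER PRIMARY KEY,
    site    TEXT,                 -- e.g. 'site:.ca'
    start   INTEGER,              -- result offset to request
    crawled INTEGER DEFAULT 0
);
CREATE TABLE IF NOT EXISTS url (
    address    TEXT UNIQUE,
    downloaded INTEGER DEFAULT 0
);
""")

def seed(site_strings):
    # Each TLD and sub-TLD goes in as a searchseed with starting index 0.
    for s in site_strings:
        DB.execute("INSERT INTO searchseed (site, start) VALUES (?, 0)", (s,))
    DB.commit()

def scrape_result_urls(page):
    # Crude placeholder parser; a real one must match Google's markup.
    return re.findall(r'href="(https?://[^"]+)"', page)

def has_more_results(page):
    # Crude placeholder check for a "next page" link.
    return ">Next<" in page

def fetch_result_page(term, site, start):
    # 'num=100' requests 100 results per page, as noted above.
    query = urllib.parse.urlencode(
        {"q": f"{term} {site}", "num": 100, "start": start})
    req = urllib.request.Request(
        "https://www.google.com/search?" + query,
        # Forged browser User-Agent; see "Acting Human" below.
        headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")

def crawl(term):
    while True:
        row = DB.execute("SELECT id, site, start FROM searchseed "
                         "WHERE crawled = 0 LIMIT 1").fetchone()
        if row is None:
            break
        seed_id, site, start = row
        page = fetch_result_page(term, site, start)
        for u in scrape_result_urls(page):
            DB.execute("INSERT OR IGNORE INTO url (address) VALUES (?)", (u,))
        DB.execute("UPDATE searchseed SET crawled = 1 WHERE id = ?", (seed_id,))
        if has_more_results(page):
            # Same seed, starting index 100 higher.
            DB.execute("INSERT INTO searchseed (site, start) VALUES (?, ?)",
                       (site, start + 100))
        DB.commit()
        time.sleep(random.uniform(2, 5))  # random delay between requests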
Digging Into Sites
Sometimes, Google notes that certain sites are particularly rich sources of “term” and offers to let you search that site for “term”. This basically links to another search for “term site:somesite”. That site gets its own search seed and the program might harvest up to 1000 URLs from that site alone.

Harvesting the Data
Armed with a database of URLs, employ the following algorithm:
- Fetch a random URL from the database which has yet to be downloaded
- Try to download it
- For goodness sake, have a mechanism in place to detect whether the download process has stalled and automatically kill it after a certain period of time
- Store the data and update the database, noting where the information was stored and that it is already downloaded
This step is easy to parallelize by simply executing multiple copies of the script. It is useful to update the URL table to indicate that one process is already trying to download a URL so multiple processes don’t duplicate work, as in the sketch below.
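A minimal sketch of this harvesting loop, reusing the invented schema from the earlier sketch plus two more invented columns (claimed, stored_at) on the url table. The stall detection here is just a socket timeout, one possible mechanism rather than necessarily the post's.

import hashlib
import os
import random
import sqlite3
import urllib.request

DB = sqlite3.connect("crawl.db")
os.makedirs("data", exist_ok=True)

def harvest_one():
    # Fetch a random URL that has been neither downloaded nor claimed.
    rows = DB.execute("SELECT address FROM url "
                      "WHERE downloaded = 0 AND claimed = 0").fetchall()
    if not rows:
        return False
    (address,) = random.choice(rows)
    # Claim the URL first so parallel copies of this script skip it.
    DB.execute("UPDATE url SET claimed = 1 WHERE address = ?", (address,))
    DB.commit()
    try:
        # The timeout is the stall detector: a transfer hanging for more
        # than 60 seconds is abandoned instead of blocking forever.
        with urllib.request.urlopen(address, timeout=60) as resp:
            data = resp.read()
    except Exception:
        return True  # leave it claimed, or reset the flag to retry later
    path = os.path.join("data", hashlib.md5(address.encode()).hexdigest())
    with open(path, "wb") as f:
        f.write(data)
    # Record where the data was stored and that it is downloaded.
    DB.execute("UPDATE url SET downloaded = 1, stored_at = ? "
               "WHERE address = ?", (path, address))
    DB.commit()
    return True

while harvest_one():
    pass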
Acting Human
A few factors here:
- Google allegedly doesn’t like automated programs crawling its search results. Thus, at the very least, don’t let your script advertise itself as an automated program. At a basic level, this means forging the User-Agent: HTTP header. By default, Python’s urllib2 will identify itself as a programming language. Change this to a well-known browser string (see the snippet after this list).
- Be patient; don’t fire off these search requests as quickly as possible. My crawling algorithm inserts a random delay of a few seconds in between each request. This can still yield hundreds of useful URLs per minute.
- On harvesting the data: Even though you can parallelize this and download data as quickly as your connection can handle, it’s a good idea to randomize the URLs. If you hypothetically had 4 download processes running at once and they got to a point in the URL table which had many URLs from a single site, the server might be configured to reject too many simultaneous requests from a single client.
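To make the first two points concrete: the post names Python's urllib2; the sketch below uses its modern descendant urllib.request, installing a global opener so every request presents a browser User-Agent. The browser string and delay bounds are arbitrary examples.

import random
import time
import urllib.request

# Replace the default "Python-urllib/x.y" identifier with a browser
# string for every subsequent urlopen() call.
opener = urllib.request.build_opener()
opener.addheaders = [("User-Agent",
                      "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/120.0 Safari/537.36")]
urllib.request.install_opener(opener)

def patient_get(url):
    # Random delay of a few seconds between requests.
    time.sleep(random.uniform(2, 6))
    with urllib.request.urlopen(url, timeout=60) as resp:
        return resp.read()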
Conclusion
Anyway, that’s just the way I would (and did) do it. What did I do with all the data? That’s a subject for a different post.
-
FFMPEG SCREENSHOT ERROR: No such filter: 'tile' [closed]
22 May 2013, by itseasy21
I have been trying to make multiple screenshots from a video file using ffmpeg, and I got the command working, but the only problem is that while executing it I get this error:
No such filter: 'tile'
Error opening filters!
The command I execute is:
ffmpeg -ss 00:00:10 -i './tmp/try.avi' -vcodec mjpeg -vframes 1 -vf 'select=not(mod(n\,1000)),scale=320:240,tile=2x3' './tmp/try.jpg'
Any solution for this?