
Media (91)

Other articles (40)

  • Automatic backup of SPIP channels

    1 April 2010, by

    When setting up an open platform, it is important for hosts to have fairly regular backups available to guard against any potential problem.
    This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)
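
    As a rough illustration of what these two plugins automate, here is a minimal Python sketch of such a backup job: a MySQL dump followed by a zip archive of the document directory. It is a sketch only; the paths, credentials, and database name are hypothetical placeholders rather than MediaSPIP's actual layout, and it assumes a reasonably modern Python.

    PYTHON:
    #!/usr/bin/env python3
    # Illustrative sketch only: dump the database, then archive the site data,
    # roughly what the Saveauto and mes_fichiers_2 plugins automate.
    # All paths and credentials below are hypothetical placeholders.
    import datetime
    import os
    import subprocess
    import zipfile

    STAMP = datetime.date.today().isoformat()
    BACKUP_DIR = "/var/backups/spip"       # hypothetical destination
    SITE_DATA = "/var/www/spip/IMG"        # hypothetical documents directory

    os.makedirs(BACKUP_DIR, exist_ok=True)

    # 1. SQL dump of the database, reloadable through phpMyAdmin or the mysql client
    dump_path = os.path.join(BACKUP_DIR, "spip-%s.sql" % STAMP)
    with open(dump_path, "wb") as dump:
        subprocess.check_call(
            ["mysqldump", "--user=spip", "--password=secret", "spip_db"],
            stdout=dump)

    # 2. Zip archive of the site's important files (documents, etc.)
    zip_path = os.path.join(BACKUP_DIR, "spip-files-%s.zip" % STAMP)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as archive:
        for root, _dirs, files in os.walk(SITE_DATA):
            for name in files:
                full = os.path.join(root, name)
                archive.write(full, os.path.relpath(full, SITE_DATA))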

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • Automated installation script of MediaSPIP

    25 April 2011, by

    To overcome the difficulties, mainly due to the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server running a compatible Linux distribution.
    To use it, you must have SSH access to your server and a root account, which the script uses to install the dependencies. Contact your provider if you do not have that.
    The documentation for using this installation script is available here.
    The code of this (...)

On other sites (7970)

  • Of ctors and dtors

    18 February 2011, by Multimedia Mike — Programming, Sega Dreamcast

    I haven’t given up on Sega Dreamcast programming. I was able to compile a bunch of homebrew code for the DC many years ago, but I can’t make it work anymore. Again, I was working with a purpose-built, open source RTOS named KallistiOS (or KOS). I can make the programs compile but not run. I had ELF files left over from years ago which still executed, but when I tried to build new ELF files, no luck: the programs crashed before even reaching my main() function.

    I found the problem: ELF files are composed of a number of sections, and 2 of these sections are named ’.ctors’ and ’.dtors’, which stand for constructors and destructors. The KOS RTOS performs a manual traversal of the .ctors section during program initialization, and this is where things go bad. The traversal code doesn’t seem to account for a .ctors section that contains only a single entry. I commented out the function that does the traversal and programs started to work, at least until it was time to exit the program and return control to the program loader. That’s when the counterpart .dtors section traversal code ran and demonstrated the same problem. I’ll exhibit the problematic code at the end of this post.

    So I’m finally tinkering with Sega Dreamcast programming once again and with a slightly better grasp of software engineering than the first time I did this.

    Portable and Compatible C?
    If nothing else, this low-level embedded stuff exposes you to some serious toolchain arcana, the likes of which you will likely never see working strictly in the desktop arena.

    Still, this exercise makes me wonder why C code from a decade ago doesn’t compile reliably now. Part of it is because gcc has gotten stricter about the syntax it will accept. In the case of this specific crashing problem, I suspect it comes down to a difference in the way the linker generates the final ELF file. I’ve written a list of items I have had to modify in the KOS codebase in order to get it to compile on more recent gcc versions. I wonder if it would be worth publishing the specifics, or if anyone would ever find the information useful? Oh, who am I kidding? Of course I’ll write it up, perhaps publish a new version of the code, if only because that’s the best chance I have of finding my own work again some years down the road.

    Problematic C Code
    See if this code makes any sense to you. It somehow traverses a list of 32-bit function pointers (in different directions, depending on whether it is handling constructors or destructors), executing each in turn. However, it appears to fall over if the list of pointers consists of a single entry.

    C:
    typedef void (*fptr)(void);

    static fptr ctor_list[1] __attribute__((section(".ctors"))) = { (fptr) -1 };
    static fptr dtor_list[1] __attribute__((section(".dtors"))) = { (fptr) -1 };

    /* Call this to execute all ctors */
    void arch_ctors() {
        fptr *fpp;

        /* Run up to the end of the list (defined by crtend) */
        for (fpp = ctor_list + 1; *fpp != 0; ++fpp)
            ;

        /* Now run the ctors backwards */
        while (--fpp > ctor_list)
            (**fpp)();
    }

    /* Call this to execute all dtors */
    void arch_dtors() {
        fptr *fpp;

        /* Do the dtors forwards */
        for (fpp = dtor_list + 1; *fpp != 0; ++fpp)
            (**fpp)();
    }
  • Revision 200: Added support for the --quiet option. Added audio tags checking. Fixed handling

    7 June 2010, by marc.noirot

    Changed Paths:
     Modify /trunk/src/check.c
     Modify /trunk/src/flv.c
     Modify /trunk/src/flv.h

    Added support for the --quiet option.
    Added audio tags checking.
    Fixed handling of empty body tags in flv functions.
    Fixed check exit code in case the file has errors.

  • Multiprocess FATE Revisited

    26 June 2010, by Multimedia Mike — FATE Server, Python

    I thought I had brainstormed a simple, elegant, multithreaded, deadlock-free refactoring for FATE in a previous post. However, I sort of glossed over the test ordering logic which I had not yet prototyped. The grim, possibly deadlock-afflicted reality is that the main thread needs to be notified as tests are completed. So, the main thread sends test specs through a queue to be executed by n tester threads and those threads send results to a results aggregator thread. Additionally, the results aggregator will need to send completed test IDs back to the main thread.



    But when I step back and look at the graph, I can’t rationalize why there should be a separate results aggregator thread. That was added to cut down on deadlock possibilities, since the main thread and the tester threads would not be waiting for data from each other. Now that I’ve come to terms with the fact that the main thread and the testers need to exchange data in real time, I think I can safely eliminate the results thread. Adding more threads is not the best way to guard against race conditions and deadlocks. Ask xine.



    I’m still hung up on the deadlock issue. I have these queues through which the threads communicate. At issue is the fact that they can cause a thread to block when inserting an item if the queue is "full". How full is full? Immaterial; seeking to answer such a question is not how you guard against race conditions. Rather, it seems to me that one side should be doing non-blocking queue operations.
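
    For reference, this is what those non-blocking operations look like on Python’s multiprocessing queues (shown here in modern Python 3 syntax rather than the Python 2 of the era; this is an illustration, not FATE code): put_nowait() raises queue.Full when the queue has hit its size limit, and get_nowait() raises queue.Empty when nothing is ready, so the caller can react instead of blocking.

    PYTHON:
    # Minimal illustration of non-blocking queue operations; not FATE code.
    import multiprocessing
    import queue
    import time

    q = multiprocessing.Queue(maxsize=1)

    try:
        q.put_nowait("test spec A")
        q.put_nowait("test spec B")   # queue already holds one item: raises queue.Full
    except queue.Full:
        print("queue full, would defer this test")

    time.sleep(0.1)  # give the queue's feeder thread a moment to move the item along

    try:
        print("got:", q.get_nowait())  # retrieves "test spec A"
        q.get_nowait()                 # nothing left: raises queue.Empty
    except queue.Empty:
        print("no results ready yet")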

    This is how I’m planning to revise the logic in the main thread:

    test_set = set of all tests to execute
    tests_pending = test_set
    tests_blocked = empty set
    tests_queue = multi-consumer queue to send test specs to tester threads
    results_queue = multi-producer queue through which tester threads send results

    while there are tests in tests_pending:
      pop a test from test_set
      if test depends on any tests that appear in tests_pending:
        add test to tests_blocked
      else:
        add test to tests_queue in a non-blocking manner
        if tests_queue is full, add test to tests_blocked

      while there are results in the results_queue:
        get a result from results_queue in a non-blocking manner
        remove the corresponding test from tests_pending

      if tests_blocked is non-empty:
        sleep for 1 second
        test_set = tests_blocked
        tests_blocked = empty set
      else:
        insert n shutdown signals, one for each thread

      go to the top of the loop and repeat until there are no more tests

    while there are results in the results_queue:
      get a result from results_queue in a blocking manner

    Not mentioned in the pseudocode (so it doesn’t get too verbose) is the logic to check whether a retrieved test result is actually an end-of-thread signal. These are counted, and the whole test process is done once one has been received for each thread.

    On the tester thread side, it’s safe for them to do blocking test queue retrievals and blocking result queue insertions. The reason for the 1-second delay before resetting tests_blocked and looping again is that I want to guard against the situation where tests A and B are to be run, A depends on B running first, and while B is running (and happens to be a long encoding test), the main thread is spinning about, obsessively testing whether it’s time to insert A into the tests queue.

    It all sounds just crazy enough to work. In fact, I coded it up and it does work, sort of. The queue gets blocked pretty quickly. Instead of sleeping, I decided it’s better to perform the put operation using a 1-second timeout.
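
    Here is a minimal sketch of how that revised main loop might look on top of Python’s multiprocessing queues, using the 1-second put timeout instead of sleeping (modern Python 3 syntax; the test spec format, the SHUTDOWN sentinel, and the "passed" stand-in for actually running a test are hypothetical, not FATE’s actual code):

    PYTHON:
    # Hedged sketch of the scheduling logic described above; only the
    # queue/ordering behavior follows the pseudocode, everything else is a stand-in.
    import multiprocessing
    import queue

    NUM_TESTERS = 4
    SHUTDOWN = None  # sentinel telling a tester to exit

    def tester(tests_queue, results_queue):
        # Tester side: blocking gets and puts are safe here.
        while True:
            test = tests_queue.get()
            if test is SHUTDOWN:
                results_queue.put(SHUTDOWN)            # acknowledge the shutdown signal
                break
            results_queue.put((test["id"], "passed"))  # stand-in for actually running the test

    def main(all_tests):
        tests_queue = multiprocessing.Queue(maxsize=NUM_TESTERS * 2)
        results_queue = multiprocessing.Queue()
        workers = [multiprocessing.Process(target=tester, args=(tests_queue, results_queue))
                   for _ in range(NUM_TESTERS)]
        for w in workers:
            w.start()

        tests_pending = {t["id"] for t in all_tests}   # not yet completed
        to_schedule = list(all_tests)                  # not yet handed to a tester
        shutdown_sent = False
        finished_testers = 0

        while finished_testers < NUM_TESTERS:
            # Try to dispatch everything that is not blocked on a pending dependency.
            blocked = []
            while to_schedule:
                test = to_schedule.pop()
                if any(dep in tests_pending for dep in test.get("depends", [])):
                    blocked.append(test)               # prerequisite still pending
                else:
                    try:
                        tests_queue.put(test, timeout=1)   # 1-second timeout instead of sleeping
                    except queue.Full:
                        blocked.append(test)
            to_schedule = blocked

            if not to_schedule and not shutdown_sent:
                for _ in range(NUM_TESTERS):           # everything dispatched: tell testers to stop
                    tests_queue.put(SHUTDOWN)
                shutdown_sent = True

            # Drain whatever results are ready; block briefly for the first one so we don't spin.
            try:
                result = results_queue.get(timeout=1)
            except queue.Empty:
                continue
            while True:
                if result is SHUTDOWN:
                    finished_testers += 1
                else:
                    tests_pending.discard(result[0])
                try:
                    result = results_queue.get_nowait()
                except queue.Empty:
                    break

        for w in workers:
            w.join()

    if __name__ == "__main__":
        main([{"id": "fate-a", "depends": []},
              {"id": "fate-b", "depends": ["fate-a"]}])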

    Still, I’m paranoid about the precise operation of the IPC queue mechanism at work here. What happens if I try to stuff in a test spec that’s a bit too large? Will the module take whatever I give it and serialize it through the queue as soon as it can? I think an impromptu science project is in order.

    big-queue.py:

    PYTHON:
    #!/usr/bin/python

    import multiprocessing
    import Queue

    def f(q):
        str = q.get()
        print "reader function got a string of %d characters" % (len(str))

    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=f, args=(q,))
    p.start()
    try:
        q.put_nowait('a' * 100000000)
    except Queue.Full:
        print "queue full"

    $ ./big-queue.py
    reader function got a string of 100000000 characters

    Since 100 MB doesn’t even make it choke, FATE’s little test specs shouldn’t pose any difficulty.