Other articles (110)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use within the Semantic Web.
    XMP makes it possible to store information about a file as an XML document: title, author, history (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Images

    15 May 2013

On other sites (8352)

  • Elacarte Presto Tablets

    14 March 2013, by Multimedia Mike — General

    I visited an Applebee’s restaurant this past weekend. The first thing I spied was a family at a table with what looked like a 7-inch tablet. It’s not an uncommon sight. However, as I moved through the restaurant, I noticed that every single table was equipped with such a tablet. It looked like this:


    ELaCarte's Presto Tablet

    For a computer nerd like me, you could probably guess that I was far more interested in this gadget than the cuisine. The thing said “Presto” on the front and “Elacarte” on the back. Putting this together, we get the website of Elacarte, the purveyors of this restaurant tablet technology. Months after the iPad was released in 2010, I remember stories about high-end restaurants showing their wine list via iPads. This tablet goes well beyond that.

    How was it? Well, confusing, mostly. The hostess told us we could order through the tablet or through her. Since we already knew what we wanted, she just manually took our order and presumably entered it into the system. So, right away, the question is: Do we order through a human or through a computer? Or a combination? Do we have to use the tablet if we don’t want to?

    Hardware
    When picking up the tablet, it’s hard not to notice that it is very heavy. At first, I suspected that it was deliberately weighted down as some minor attempt at an anti-theft measure. But then I remembered what I know about power budgets of phones and tablets– powering the screen accounts for much of the battery usage. I realized that this device needs to drive the screen for about 14 continuous hours each day. I.e., the weight must come from a massive battery.

    The screen is good. It’s a capacitive touchscreen, so nice and responsive. When I first spied the device, I felt certain it would be a resistive touchscreen (which is more accurately called a touch-and-press-down screen). There is an AC adapter on the side of the tablet. This is the only interface to the device:


    ELaCarte Presto Tablet -- view of adapter

    That looks to me like an internal SATA connector (different from an eSATA connector). Foolishly, I didn’t have a SATA cable on me so I couldn’t verify.

    User Interface
    The interface options are: Order, Games, Neighborhood, and Pay. One big benefit of accessing the menu through the Order option is that each menu item can have a picture. For people who order more by picture than text description, this is useful. Rather, it would be, if more items had pictures. I’m not sure there were more pictures than seen in the print menu.

    For Games, there were a variety of party games. The interface clearly stated that we got to play 2 free games. This implied to me that further games cost money. We tried one game briefly and the food came.

    2 more options: Neighborhood– I know I dug into this option, but I forget what it was. Maybe it discussed local attractions. Finally, Pay. This thing has an integrated credit card reader. There is no integrated printer, though, so if you want a printed receipt, you will have to request one from a human.

    Experience
    So we ordered through a human since we didn’t feel like being thrust into this new paradigm when we just wanted lunch. The staff was obviously amenable to that. However, I got a chance to ask them a lot of questions about the particulars. Apparently, they have had this system for about 5 months. It was confirmed that the tablets do, in fact, have gargantuan batteries that have to last through the restaurant’s entire business hours. Do they need to be charged every night? Yes, they do. But how? The staff described several large charging blocks with many cables sprouting out. Reportedly, some units still don’t make it through the entire day.

    When it was time to pay, I pressed the Pay button on the interface. The bill I saw had nothing in common with what we ordered (actually, it was cheaper, so perhaps I should have just accepted it). But I pointed it out to a human and they said that this happens sometimes. So they manually printed my bill. There was a dollar charge for the game that was supposed to be free. I pointed this out and they removed it. It’s minor, I know, but it’s still worth trying to work out these bugs.

    One of the staff also described how a restaurant doesn’t need to employ as many people thanks to the tablet. She gave a nervous, awkward, self-conscious laugh when she said this. All I could think of was this Dilbert comic strip in which the boss realizes that his smartphone could perform certain key functions previously handled by his assistant.

    Not A New Idea
    Some people might think this is a totally new concept. It’s not. I was immediately reminded of my university days in Boulder, Colorado, USA, circa 1997. The local Taco Bell and Arby’s restaurants both had touchscreen ordering kiosks. Step up, interact with the (probably resistive) touchscreen, get a number, and step to the counter to change money, get your food, and probably clarify your order because there is only so much that can be handled through a touchscreen.

    What I also remember is when they tore out those ordering kiosks, also circa 1997. I don’t know the exact reason. Maybe people didn’t like them. Maybe there were maintenance costs that made them not worth the hassle.

    Then there are the widespread self-checkout lanes in grocery stores. Personally, I like those, though I know many don’t. However, this restaurant tablet thing hasn’t won me over yet. What’s the difference? Perhaps it’s that automated lanes at grocery stores require zero external assistance– at least, if you do everything correctly. Personally, I work well with these lanes because I can pretty much guess the constraints of the system and I am careful not to confuse the computer in any way. Until they deploy serving droids, or at least food conveyors, there still needs to be some human interaction, and I think the division between the human and computer roles is unintuitive in the restaurant case.

    I don’t really care to return to the same restaurant. I’ll likely avoid any other restaurant that has these tablets. For some reason, I think I’m probably supposed to be the ideal consumer of this concept. But the idea will probably perform all right anyway. Elacarte’s website has plenty of graphs demonstrating that deploying these tablets is extremely profitable.

  • Finding Optimal Code Coverage

    7 March 2012, by Multimedia Mike — Programming

    A few months ago, I published a procedure for analyzing code coverage of the test suites exercised in FFmpeg and Libav. I used it to add some more tests and I have it on good authority that it has helped other developers fill in some gaps as well (beginning with students helping out with the projects as part of the Google Code-In program). Now I’m wondering about ways to do better.

    Current Process
    When adding a test that depends on a sample (like a demuxer or decoder test), it’s ideal to add a sample that’s A) small, and B) exercises as much of the codebase as possible. When I was studying code coverage statistics for the WC4-Xan video decoder, I noticed that the sample didn’t exercise one of the 2 possible frame types. So I scouted samples until I found one that covered both types, trimmed the sample down, and updated the coverage suite.

    I started wondering about a method for finding the optimal test sample for a given piece of code, one that exercises every code path in a module. Okay, so that’s foolhardy in the vast majority of cases (although I was able to add one test spec that pushed a module’s code coverage from 0% all the way to 100% — but the module in question only had 2 exercisable lines). Still, given a large enough corpus of samples, how can I find the smallest set of samples that exercise the complete codebase?

    This almost sounds like an NP-complete problem. But why should that stop me from trying to find a solution?

    Science Project
    Here’s the pitch:

    • Instrument FFmpeg with code coverage support
    • Download lots of media to exercise a particular module
    • Run FFmpeg against each sample and log code coverage statistics
    • Distill the resulting data in some meaningful way in order to obtain more optimal code coverage

    That second step sounds harsh– downloading lots and lots of media. Fortunately, there is at least one multimedia format in the projects that tends to be extremely small: ANSI. These are files that are designed to display elaborate scrolling graphics using text mode. Further, the FATE sample currently deployed for this test (TRE_IOM5.ANS) only exercises a little less than 50% of the code in libavcodec/ansi.c. I believe this makes the ANSI video decoder a good candidate for this experiment.

    Procedure
    First, find a site that hosts a lot of ANSI files. Hi, sixteencolors.net. This site has lots (on the order of 4000) of artpacks, which are ZIP archives that contain multiple ANSI files (and sometimes some other files). I scraped a list of all the artpack names.

    In an effort to be responsible, I randomized the list of artpacks and downloaded periodically and with limited bandwidth ('wget --limit-rate=20k').
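    By way of illustration, a small downloader along those lines might look like the sketch below; the list file name and the base URL are placeholders, not details from the actual run.

    import random
    import subprocess
    import time

    ARTPACK_LIST = "artpacks.txt"             # scraped artpack names, one per line (assumed)
    BASE_URL = "http://example.com/artpacks"  # placeholder, not the real archive URL

    with open(ARTPACK_LIST) as f:
        packs = [line.strip() for line in f if line.strip()]
    random.shuffle(packs)                     # randomize the download order

    for pack in packs:
        # Throttle to 20 KB/s, as described above, and pause between downloads
        subprocess.run(["wget", "--limit-rate=20k", "%s/%s" % (BASE_URL, pack)], check=False)
        time.sleep(300)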

    Run ‘gcov’ on ansi.c in order to gather the full set of line numbers to be covered.

    For each artpack, unpack the contents, run the instrumented FFmpeg on each file inside, run ‘gcov’ on ansi.c, and log statistics including the file’s size, the file’s location (artpack.zip:filename), and a comma-separated list of line numbers touched.
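    Here is a rough sketch of what that measurement loop could look like. The gcov step, the forced tty demuxer, and the logged fields follow the description in this post; the build paths, the CSV format, and the counter-reset detail are assumptions for illustration, not the actual script.

    import csv
    import glob
    import os
    import shutil
    import subprocess
    import zipfile

    FFMPEG = "./ffmpeg"                # instrumented build (assumed path)
    GCDA = "libavcodec/ansi.gcda"      # coverage counters for ansi.c (assumed path)

    def touched_lines():
        """Run gcov on ansi.c and return the set of line numbers it reports as executed."""
        subprocess.run(["gcov", "-o", "libavcodec", "libavcodec/ansi.c"],
                       stdout=subprocess.DEVNULL)
        hit = set()
        with open("ansi.c.gcov", errors="replace") as report:
            for row in report:
                parts = row.split(":", 2)
                if len(parts) < 3:
                    continue
                count, lineno = parts[0].strip(), parts[1].strip()
                # '-' marks non-executable lines, '#####' marks lines never executed
                if count not in ("-", "#####") and lineno.isdigit() and int(lineno) > 0:
                    hit.add(int(lineno))
        return hit

    with open("coverage.csv", "w", newline="") as log:
        writer = csv.writer(log)
        for artpack in sorted(glob.glob("artpacks/*.zip")):
            shutil.rmtree("tmp", ignore_errors=True)
            with zipfile.ZipFile(artpack) as z:
                z.extractall("tmp")
            for sample in glob.glob("tmp/**/*", recursive=True):
                if not os.path.isfile(sample):
                    continue
                if os.path.exists(GCDA):
                    os.remove(GCDA)    # reset counters so each sample is measured in isolation
                # Force the tty demuxer, as noted in the results discussion below
                subprocess.run([FFMPEG, "-f", "tty", "-i", sample, "-f", "null", "-"],
                               stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
                writer.writerow([os.path.getsize(sample),
                                 "%s:%s" % (os.path.basename(artpack), os.path.basename(sample)),
                                 ",".join(str(n) for n in sorted(touched_lines()))])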

    Definition of ‘Optimal’
    The foregoing procedure worked and yielded useful, raw data. Now I have to figure out how to analyze it.

    I think it’s most desirable to have the smallest files (in terms of bytes) that exercise the most lines of code. To that end, I sorted the results by filesize, ascending. A Python script initializes a set of all exercisable line numbers in ansi.c, then iterates through each file’s stats line, adding the file to the list of candidate samples if its set of exercised lines can remove any line numbers from the overall set of lines. Ideally, that set of lines should devolve to an empty set.
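    In code, that greedy pass might look something like this sketch, assuming the log has already been parsed into (size, location, set-of-exercised-line-numbers) records and that all_lines holds the full set of exercisable lines reported by gcov:

    def pick_samples(records, all_lines):
        """Greedy selection: smallest files first, keep any file that covers new lines."""
        remaining = set(all_lines)
        chosen = []
        for size, location, lines in sorted(records, key=lambda rec: rec[0]):
            newly_covered = lines & remaining
            if newly_covered:                  # keep the file only if it adds coverage
                chosen.append((size, location))
                remaining -= newly_covered
        return chosen, remaining               # ideally `remaining` devolves to an empty set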

    I think a second possible approach is to find the single sample that exercises the most code and then proceed with the previously described method.
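    With the same hypothetical records structure, that second approach reduces to a single expression:

    # Approach 2: the single sample that touches the most decoder lines
    best = max(records, key=lambda rec: len(rec[2]))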

    Initial Results
    So far, I have analyzed 13324 samples from 357 different artpacks provided by sixteencolors.net.

    Using the first method, I can find a set of samples that covers nearly 80% of ansi.c:

    0 bytes: bad-0494.zip:5
    1 bytes: grip1293.zip:-ANSI---.---
    1 bytes: pur-0794.zip:.
    2 bytes: awe9706.zip:-ANSI───.───
    61 bytes: echo0197.zip:-(ART)-
    62 bytes: hx03.zip:HX005.DAT
    76 bytes: imp-0494.zip:IMPVIEW.CFG
    82 bytes: ice0010b.zip:_cont'd_.___
    101 bytes: bdp-0696.zip:BDP2.WAD
    112 bytes: plain12.zip:--------.---
    181 bytes: ins1295v.zip:-°VGA°-.  н
    219 bytes: purg-22.zip:NEM-SHIT.ASC
    289 bytes: srg1196.zip:HOWTOREQ.JNK
    315 bytes: karma-04.zip:FASHION.COM
    318 bytes: buzina9.zip:ox-rmzzy.ans
    411 bytes: solo1195.zip:FU-BLAH1.RIP
    621 bytes: ciapak14.zip:NA-APOC1.ASC
    951 bytes: lght9404.zip:AM-TDHO1.LIT
    1214 bytes: atb-1297.zip:TX-ROKL.ASC
    2332 bytes: imp-0494.zip:STATUS.ANS
    3218 bytes: acepak03.zip:TR-STAT5.ANS
    6068 bytes: lgc-0193.zip:LGC-0193.MEM
    16778 bytes: purg-20.zip:EZ-HIR~1.JPG
    20582 bytes: utd0495.zip:LT-CROW3.ANS
    26237 bytes: quad0597.zip:MR-QPWP.GIF
    29208 bytes: mx-pack17.zip:mx-mobile-source-logo.jpg
    ----
    109440 bytes total

    A few notes about that list: Some of those filenames are comprised primarily of control characters. 133t, and all that. The first file is 0 bytes. I wondered if I should discard 0-length files but decided to keep those in, especially if they exercise lines that wouldn’t normally be activated. Also, there are a few JPEG and GIF files in the set. I should point out that I forced the tty demuxer using -f tty and there isn’t much in the way of signatures for this format. So, again, whatever exercises more lines is better.

    Using this same corpus, I tried approach 2– which single sample exercises the most lines of the decoder? Answer: blde9502.zip:REQUEST.EXE. Huh. I checked it out and ‘file’ IDs it as an MS-DOS executable. So, that approach wasn’t fruitful, at least not for this corpus, since I’m forcing everything through this narrow code path.

    Think About The Future
    Where can I take this next? The cloud! I have people inside the search engine industry who have furnished me with extensive lists of specific types of multimedia files from around the internet. I also see that Amazon Web Services Elastic Compute Cloud (AWS EC2) instances don’t charge for incoming bandwidth.

    I think you can see where I’m going with this.

  • Server Move For multimedia.cx

    1 August 2014, by Multimedia Mike — General

    I made a big change to multimedia.cx last week: I moved hosting from a shared web hosting plan that I had been using for 10 years to a dedicated virtual private server (VPS). In short, I now have no one to blame but myself for any server problems I experience from here on out.

    The tipping point occurred a few months ago when my game music search engine kept breaking regardless of what technology I was using. First, I had an admittedly odd C-based CGI solution which broke due to mysterious binary compatibility issues, the sort that are bound to occur when trying to make a Linux binary run on heterogeneous distributions. The second solution was an SQLite-based solution. Like the first solution, this worked great until it didn’t work anymore. Something else mysteriously broke vis-à-vis PHP and SQLite on my server. I started investigating a MySQL-based full text search solution but couldn’t make it work, and decided that I shouldn’t have to either.

    Ironically, just before I finished this entire move operation, I noticed that my SQLite-based FTS solution was working again on the old shared host. I’m not sure when that problem went away. No matter, I had already thrown the switch.

    How Hard Could It Be?
    We all have thresholds for the type of chores we’re willing to put up with and which we’d rather pay someone else to perform. For the past 10 years, I felt that administering a website’s underlying software is something that I would rather pay someone else to worry about. To be fair, 10 years ago, I don’t think VPSs were a thing, or at least a viable thing in the consumer space, and I wouldn’t have been competent enough to properly administer one. Though I would have been a full-time Linux user for 5 years at that point, I was still the type to build all of my own packages from source (I may have still been running Linux From Scratch 10 years ago) which might not be the most tractable solution for server stability.

    These days, VPSs are a much more affordable option (easily competitive with shared web hosting). I also realized I know exactly how to install and configure all the software that runs the main components of the various multimedia.cx sites, having done it on local setups just to ensure that my automated backups would actually be useful in the event of catastrophe.

    All I needed was the will to do it.

    The Switchover Process
    Here’s the rough plan:

    • Investigate options for both VPS providers and mail hosts– I might be willing to run a web server but NOT a mail server
    • Start plotting several months in advance of my yearly shared hosting renewal date
    • Screw around for several months, playing video games and generally finding reasons to put off the move
    • Panic when realizing there are only a few days left before the yearly renewal comes due

    So that’s the planning phase. BTW, I chose Digital Ocean for VPS and Zoho for email hosting. Here’s the execution phase I did last week:

    • Register with Digital Ocean and set up DNS entries to point to the old shared host for the time being
    • Once the D-O DNS servers respond correctly using a manual ‘dig’ command, use their servers as the authoritative ones for multimedia.cx
    • Create a new Droplet (D-O VPS), install all the right software, move the databases, upload the files; exhaustively document each step, gotcha, and pitfall; treat a VPS as necessarily disposable and have an eye towards iterating the process with a new VPS
    • Use /etc/hosts on a local machine to point DNS to the new server and verify that each site is working correctly (a small verification sketch follows this list)
    • After everything looks all right, update the DNS records to point to the new server

    Finally, flip the switch on the MX record by pointing it to the new email provider.
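    For the /etc/hosts verification step, a minimal check along these lines would do the job; the Droplet address and the hostname list below are placeholders, not the real values.

    import socket
    import urllib.request

    NEW_IP = "203.0.113.10"                          # placeholder Droplet address
    SITES = ["multimedia.cx", "wiki.multimedia.cx"]  # illustrative subset of the sites

    for site in SITES:
        resolved = socket.gethostbyname(site)        # honors /etc/hosts overrides
        status = urllib.request.urlopen("http://%s/" % site, timeout=10).status
        where = "new server" if resolved == NEW_IP else "old server"
        print("%s: %s (%s), HTTP %s" % (site, resolved, where, status))

    Run once with the /etc/hosts override in place and once after the real DNS change, the reported address should flip from the old server to the new one.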

    Improvements and Problems
    Hosting on Digital Ocean is quite amazing so far. Maybe it’s the SSDs. Whatever it is, all the sites are performing far better than on the old shared web host. People who edit the MultimediaWiki report that changes get saved in less than the 10 or so seconds required on the old server.

    Again, all problems are now my problems. A sore spot with the shared web host was general poor performance. The hosting company would sometimes complain that my sites were using too much CPU. I would have loved to try to optimize things. However, the cPanel interface found on many shared hosts doesn’t give you a great deal of data for debugging performance problems. Meanwhile, the same sites, same software, and same load are considerably more performant on the VPS.

    Problem: I’ve already had the MySQL database die due to a spike in usage. I had to manually restart it. I was considering a cron-based solution to check if the server is running and restart it if not. When I reasoned that my databases are mostly read and not often modified, so db crashes shouldn’t be too disastrous, a friend helpfully reminded me that, “You would not make a good sysadmin with attitudes like ‘an occasional crash is okay’.”
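    A minimal sketch of that cron-based idea, assuming the standard mysqladmin client and a systemd-managed service named mysql (both assumptions about this particular Droplet):

    import subprocess

    def mysql_alive():
        # "mysqladmin ping" exits 0 when the server answers
        return subprocess.run(["mysqladmin", "ping"],
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL).returncode == 0

    if not mysql_alive():
        # assumes a systemd-managed service named "mysql"
        subprocess.run(["systemctl", "restart", "mysql"], check=False)

    Dropped into root’s crontab every few minutes, something like this would paper over an occasional crash, though it does nothing about the underlying memory pressure.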

    To this end, I am planning to migrate the database server to a separate VPS. This is a strategy that even Digital Ocean recommends. I’m hoping that the MySQL server isn’t subject to such memory spikes, but I’ll continue to monitor it after I set it up.

    Overall, the server continues to get modest amounts of traffic. I predict it will remain that way unless Dark Shikari resurrects the x264dev blog. The biggest spike that multimedia.cx ever saw was when Steve Jobs linked to this WebM post.

    Dropped Sites
    There are a bunch of subdomains I dropped because I hadn’t done anything with them for years and I doubt anyone will notice they’re gone. One notable section that I decided to drop is the samples.mplayerhq.hu archive. It will live on, but it will be hosted by samples.ffmpeg.org, which had a full mirror anyway. The lower-end VPS instances don’t have the 53 GB necessary.

    Going Forward
    Here’s to another 10 years of multimedia.cx, even if multimedia isn’t as exciting as it was 10 years ago (personal opinion; I’ll have another post on this later). But at least I can get working on some other projects now that this is done. For the past 4 months or so, whenever I thought of doing some other project, I always remembered that this server move took priority over everything else.