Other articles (53)

  • MediaSPIP in private mode (Intranet)

    17 September 2013

    As of version 0.3, a MediaSPIP channel can be made private, blocked to anyone who is not logged in, thanks to the "Intranet/extranet" plugin.
    When enabled, the Intranet/extranet plugin blocks access to the channel for any unidentified visitor, preventing them from reaching the content by systematically redirecting them to the login form.
    This system can be particularly useful for certain uses, such as a workshop with children whose content must not (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and other formats (Open Office, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)

On other sites (4161)

  • On-premise analytics demand grows as Google Analytics GDPR uncertainties continue

    7 January 2020, by Jake Thornton — Privacy

    The Google Analytics GDPR relationship is a complicated one. Website owners in German states such as Berlin are now required to ask users for consent to collect their data. This doesn’t make for the friendliest user experience, and often the website visitor will simply click “no.”

    The problem Google Analytics now presents to website owners in the EU is that the more visitors click “no”, the less accurate your data becomes.

    Why do you need to ask your visitors for consent?

    At this stage it’s simply because Google Analytics collects data for its own purposes. An example of this is using your visitors’ personal data for retargeting across its advertising platforms, such as Google Ads and YouTube.

    Google’s Privacy & Terms states: “when you visit a website that uses advertising services like AdSense, including analytics tools like Google Analytics, or embeds video content from YouTube, your web browser automatically sends certain information to Google. This includes the URL of the page you’re visiting and your IP address. We may also set cookies on your browser or read cookies that are already there. Apps that use Google advertising services also share information with Google, such as the name of the app and a unique identifier for advertising.”

    The rise of hosting web analytics on-premise

    Managing Google Analytics and GDPR can quickly become complicated, so there has been an increase in website owners switching from cloud-hosted web analytics platforms like Google Analytics to more GDPR-compliant alternatives that let you host the analytics software on your own servers. This is called hosting web analytics on-premise.

    Hosting web analytics on your own servers means:

    No third parties are involved

    The visitor data your website collects is stored on your own internal infrastructure. This means no third parties are involved and there’s no risk of personal data being used the way Google Analytics uses it, e.g. sending personal data to its advertising platforms.

    When you sign up with Google Analytics you sign away control of your users’ personal data. With on-premise website analytics, you own your data and are in full control.

    NOTE: Though Google Analytics uses personal data for its own purposes, not all cloud-hosted web analytics platforms do this. As an example, the Matomo Analytics Cloud hosted solution states that the personal data collected is not used for Matomo’s own purposes and that Matomo has no right to access or use this personal data.

    You control where in the world your personal data is stored

    Google Analytics servers are based in the USA, Europe and Asia, so where your personal data will end up is uncertain, and with free Google Analytics you don’t have the option to choose which location it goes to.

    Different countries have different laws when it comes to accessing personal data. When you choose to host your web analytics on-premise, you can choose the location of your servers and where the personal data is stored.

    More flexibility

    With self-hosted web analytics platforms like Matomo On-Premise, you can extend the platform to do anything you want without the restrictions that cloud hosted platforms impose.

    You can :

    • Get full access to the source code of open-source solutions, like Matomo
    • Extend the platform however you want for your business
    • Get access to APIs (see the query sketch below)
    • Have no data limitations or restrictions
    • Get RAW data access
    • Have control over security

    >> Read more about on-premise flexibility for web analytics here
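
    To make the API point above concrete: a self-hosted Matomo instance exposes an HTTP reporting API, so your analytics data can be queried programmatically. Below is a minimal sketch, not an official example; the host name, site id and token are placeholders for whatever your own installation uses.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class MatomoReportDemo
    {
        static async Task Main()
        {
            using var http = new HttpClient();
            // VisitsSummary.get returns visit counts for the requested
            // period; host, idSite and token_auth below are placeholders.
            var url = "https://analytics.example.com/index.php" +
                      "?module=API&method=VisitsSummary.get" +
                      "&idSite=1&period=day&date=today&format=JSON" +
                      "&token_auth=YOUR_TOKEN_AUTH";
            Console.WriteLine(await http.GetStringAsync(url));
        }
    }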

    So what does the future look like for Google Analytics and GDPR?

    It’s difficult to assess this right now. How exactly GDPR will be enforced is still quite unclear.

    What is clear, however, is that website owners in Berlin who use Google Analytics are now lawfully required to ask their visitors for consent to collect personal data. It has been reported that Google Analytics has already received 200,000 complaints in Germany alone, and this trend looks likely to continue across much of the EU.

    When using Google Analytics in the EU you must also ensure your privacy policy is updated so website visitors are aware that data is being collected through Google Analytics for Google’s own purposes.

    Moving to a web analytics on-premise platform

    Matomo Analytics is the #1 open-source web analytics platform in the world and has been rated as an exceptional alternative to Google Analytics. Check the reviews on Capterra.

    Choosing Matomo On-Premise means you can control exactly where your data is stored, you have full flexibility to customise the platform to do what you want, and it’s FREE.

    Matomo’s mission is to give control back to website owners and the team has designed the platform so that moving away from Google Analytics is seamless. Matomo offers most of your favourite Google Analytics features, a leaner interface to navigate, and the option to add free and paid premium features that Google Analytics can’t even offer you.

    And now you can import your historical Google Analytics data directly into Matomo with the Google Analytics Importer plugin.

    And if you can’t host web analytics on your own servers ...

    Hosting web analytics on-premise is not an option for all businesses, as you need the internal infrastructure and technical knowledge to host your own platform.

    If you can’t self-host, Matomo offers a Cloud-hosted solution that is as easy to set up and use as Google Analytics and is hosted on Matomo’s servers in the EU.

    The GDPR advantages of choosing Matomo Cloud over Google Analytics are:

    • Servers are secure and based in the EU (strict laws forbid outside access)
    • 100% data ownership – we never use data for our own purposes
    • You can export your data anytime and switch to Matomo On-Premise whenever you like
    • User-privacy protection
    • Advanced GDPR Manager and data anonymisation features which GA doesn’t offer

    Interested in learning more?

    If you want to learn more about why users are making the move from Google Analytics to Matomo, check out our Matomo Analytics vs Google Analytics comparison page.

    >> Matomo Analytics vs Google Analytics

  • Stream ffmpeg transcoding result to S3

    7 June 2019, by mabead

    I want to transcode a large file using FFmpeg and store the result directly on AWS S3. This will be done inside an AWS Lambda function, which has limited tmp space, so I can’t store the transcoding result locally and then upload it to S3 in a second step; I won’t have enough tmp space. I therefore want to store the FFmpeg output directly on S3.

    I therefore created an S3 pre-signed URL that allows ’PUT’:

    // Build a pre-signed URL that allows a PUT of s3://my-bucket/output.mp3
    // for the next five minutes.
    using System;
    using Amazon.S3;
    using Amazon.S3.Model;

    var s3Client = new AmazonS3Client();
    var outputPath = s3Client.GetPreSignedURL(new GetPreSignedUrlRequest
    {
        BucketName = "my-bucket",
        Expires = DateTime.UtcNow.AddMinutes(5),
        Key = "output.mp3",
        Verb = HttpVerb.PUT,
    });

    I then called ffmpeg with the resulting pre-signed URL:

    ffmpeg -i C:\input.wav -y -vn -ar 44100 -ac 2 -ab 192k -f mp3 https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550427237&Signature=%2BE8Wc%2F%2FQYrvGxzc%2FgXnsvauKnac%3D

    FFmpeg returns an exit code of 1 with the following output:

    ffmpeg version N-93120-ga84af760b8 Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 8.2.1 (GCC) 20190212
     configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
     libavutil      56. 26.100 / 56. 26.100
     libavcodec     58. 47.100 / 58. 47.100
     libavformat    58. 26.101 / 58. 26.101
     libavdevice    58.  6.101 / 58.  6.101
     libavfilter     7. 48.100 /  7. 48.100
     libswscale      5.  4.100 /  5.  4.100
     libswresample   3.  4.100 /  3.  4.100
     libpostproc    55.  4.100 / 55.  4.100
    Guessed Channel Layout for Input Stream #0.0 : stereo
    Input #0, wav, from 'C:\input.wav':
     Duration: 00:04:16.72, bitrate: 3072 kb/s
       Stream #0:0: Audio: pcm_s32le ([1][0][0][0] / 0x0001), 48000 Hz, stereo, s32, 3072 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (pcm_s32le (native) -> mp3 (libmp3lame))
    Press [q] to stop, [?] for help
    Output #0, mp3, to 'https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550427237&Signature=%2BE8Wc%2F%2FQYrvGxzc%2FgXnsvauKnac%3D':
     Metadata:
       TSSE            : Lavf58.26.101
       Stream #0:0: Audio: mp3 (libmp3lame), 44100 Hz, stereo, s32p, 192 kb/s
       Metadata:
         encoder         : Lavc58.47.100 libmp3lame
    size=     577kB time=00:00:24.58 bitrate= 192.2kbits/s speed=49.1x    
    size=    1109kB time=00:00:47.28 bitrate= 192.1kbits/s speed=47.2x    
    [tls @ 000001d73d786b00] Error in the push function.
    av_interleaved_write_frame(): I/O error
    Error writing trailer of https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550427237&Signature=%2BE8Wc%2F%2FQYrvGxzc%2FgXnsvauKnac%3D: I/O error
    size=    1143kB time=00:00:48.77 bitrate= 192.0kbits/s speed=  47x    
    video:0kB audio:1144kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    [tls @ 000001d73d786b00] The specified session has been invalidated for some reason.
    [tls @ 000001d73d786b00] Error in the pull function.
    [https @ 000001d73d784fc0] URL read error:  -5
    Conversion failed!

    As you can see, I get a URL read error. This is a little surprising to me since I want to write to this URL, not read from it.

    Does anybody know how I can store my FFmpeg output directly on S3 without having to store it locally first?

    Edit 1
    I then tried using the -method PUT parameter and http instead of https to take TLS out of the equation. Here’s the output I got when running ffmpeg with the -v trace option.

    ffmpeg version N-93120-ga84af760b8 Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 8.2.1 (GCC) 20190212
     configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
     libavutil      56. 26.100 / 56. 26.100
     libavcodec     58. 47.100 / 58. 47.100
     libavformat    58. 26.101 / 58. 26.101
     libavdevice    58.  6.101 / 58.  6.101
     libavfilter     7. 48.100 /  7. 48.100
     libswscale      5.  4.100 /  5.  4.100
     libswresample   3.  4.100 /  3.  4.100
     libpostproc    55.  4.100 / 55.  4.100
    Splitting the commandline.
    Reading option '-i' ... matched as input url with argument 'C:\input.wav'.
    Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
    Reading option '-vn' ... matched as option 'vn' (disable video) with argument '1'.
    Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '44100'.
    Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '2'.
    Reading option '-ab' ... matched as option 'ab' (audio bitrate (please use -b:a)) with argument '192k'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'mp3'.
    Reading option '-method' ... matched as AVOption 'method' with argument 'PUT'.
    Reading option '-v' ... matched as option 'v' (set logging level) with argument 'trace'.
    Reading option 'https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D' ... matched as output url.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option y (overwrite output files) with argument 1.
    Applying option v (set logging level) with argument trace.
    Successfully parsed a group of options.
    Parsing a group of options: input url C:\input.wav.
    Successfully parsed a group of options.
    Opening an input file: C:\input.wav.
    [NULL @ 000001fb37abb180] Opening 'C:\input.wav' for reading
    [file @ 000001fb37abc180] Setting default whitelist 'file,crypto'
    Probing wav score:99 size:2048
    [wav @ 000001fb37abb180] Format wav probed with size=2048 and score=99
    [wav @ 000001fb37abb180] Before avformat_find_stream_info() pos: 54 bytes read:65590 seeks:1 nb_streams:1
    [wav @ 000001fb37abb180] parser not found for codec pcm_s32le, packets or times may be invalid.
       Last message repeated 1 times
    [wav @ 000001fb37abb180] All info found
    [wav @ 000001fb37abb180] stream 0: start_time: -192153584101141.156 duration: 256.716
    [wav @ 000001fb37abb180] format: start_time: -9223372036854.775 duration: 256.716 bitrate=3072 kb/s
    [wav @ 000001fb37abb180] After avformat_find_stream_info() pos: 204854 bytes read:294966 seeks:1 frames:50
    Guessed Channel Layout for Input Stream #0.0 : stereo
    Input #0, wav, from 'C:\input.wav':
     Duration: 00:04:16.72, bitrate: 3072 kb/s
       Stream #0:0, 50, 1/48000: Audio: pcm_s32le ([1][0][0][0] / 0x0001), 48000 Hz, stereo, s32, 3072 kb/s
    Successfully opened the file.
    Parsing a group of options: output url https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D.
    Applying option vn (disable video) with argument 1.
    Applying option ar (set audio sampling rate (in Hz)) with argument 44100.
    Applying option ac (set number of audio channels) with argument 2.
    Applying option ab (audio bitrate (please use -b:a)) with argument 192k.
    Applying option f (force format) with argument mp3.
    Successfully parsed a group of options.
    Opening an output file: https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D.
    [http @ 000001fb37b15140] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
    [tcp @ 000001fb37b16c80] Original list of addresses:
    [tcp @ 000001fb37b16c80] Address 52.216.8.203 port 80
    [tcp @ 000001fb37b16c80] Interleaved list of addresses:
    [tcp @ 000001fb37b16c80] Address 52.216.8.203 port 80
    [tcp @ 000001fb37b16c80] Starting connection attempt to 52.216.8.203 port 80
    [tcp @ 000001fb37b16c80] Successfully connected to 52.216.8.203 port 80
    [http @ 000001fb37b15140] request: PUT /output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D HTTP/1.1
    Transfer-Encoding: chunked
    User-Agent: Lavf/58.26.101
    Accept: */*
    Connection: close
    Host: landr-distribution-reportsdev-mb.s3.amazonaws.com
    Icy-MetaData: 1
    Successfully opened the file.
    Stream mapping:
     Stream #0:0 -> #0:0 (pcm_s32le (native) -> mp3 (libmp3lame))
    Press [q] to stop, [?] for help
    cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    detected 8 logical cores
    [graph_0_in_0_0 @ 000001fb37b21080] Setting 'time_base' to value '1/48000'
    [graph_0_in_0_0 @ 000001fb37b21080] Setting 'sample_rate' to value '48000'
    [graph_0_in_0_0 @ 000001fb37b21080] Setting 'sample_fmt' to value 's32'
    [graph_0_in_0_0 @ 000001fb37b21080] Setting 'channel_layout' to value '0x3'
    [graph_0_in_0_0 @ 000001fb37b21080] tb:1/48000 samplefmt:s32 samplerate:48000 chlayout:0x3
    [format_out_0_0 @ 000001fb37b22cc0] Setting 'sample_fmts' to value 's32p|fltp|s16p'
    [format_out_0_0 @ 000001fb37b22cc0] Setting 'sample_rates' to value '44100'
    [format_out_0_0 @ 000001fb37b22cc0] Setting 'channel_layouts' to value '0x3'
    [format_out_0_0 @ 000001fb37b22cc0] auto-inserting filter 'auto_resampler_0' between the filter 'Parsed_anull_0' and the filter 'format_out_0_0'
    [AVFilterGraph @ 000001fb37b0d940] query_formats: 4 queried, 6 merged, 3 already done, 0 delayed
    [auto_resampler_0 @ 000001fb37b251c0] picking s32p out of 3 ref:s32
    [auto_resampler_0 @ 000001fb37b251c0] [SWR @ 000001fb37b252c0] Using fltp internally between filters
    [auto_resampler_0 @ 000001fb37b251c0] ch:2 chl:stereo fmt:s32 r:48000Hz -> ch:2 chl:stereo fmt:s32p r:44100Hz
    Output #0, mp3, to 'https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D':
     Metadata:
       TSSE            : Lavf58.26.101
       Stream #0:0, 0, 1/44100: Audio: mp3 (libmp3lame), 44100 Hz, stereo, s32p, delay 1105, 192 kb/s
       Metadata:
         encoder         : Lavc58.47.100 libmp3lame
    cur_dts is invalid (this is harmless if it occurs once at the start per stream)
       Last message repeated 6 times
    size=     649kB time=00:00:27.66 bitrate= 192.2kbits/s speed=55.3x    
    size=    1207kB time=00:00:51.48 bitrate= 192.1kbits/s speed=51.5x    
    av_interleaved_write_frame(): Unknown error
    No more output streams to write to, finishing.
    [libmp3lame @ 000001fb37b147c0] Trying to remove 47 more samples than there are in the queue
    Error writing trailer of https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D: Error number -10054 occurred
    size=    1251kB time=00:00:53.39 bitrate= 192.0kbits/s speed=51.5x    
    video:0kB audio:1252kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    Input file #0 (C:\input.wav):
     Input stream #0:0 (audio): 5014 packets read (20537344 bytes); 5014 frames decoded (2567168 samples);
     Total: 5014 packets (20537344 bytes) demuxed
    Output file #0 (https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D):
     Output stream #0:0 (audio): 2047 frames encoded (2358144 samples); 2045 packets muxed (1282089 bytes);
     Total: 2045 packets (1282089 bytes) muxed
    5014 frames successfully decoded, 0 decoding errors
    [AVIOContext @ 000001fb37b1f440] Statistics: 0 seeks, 2046 writeouts
    [http @ 000001fb37b15140] URL read error:  -10054
    [AVIOContext @ 000001fb37ac4400] Statistics: 20611126 bytes read, 1 seeks
    Conversion failed!

    So it looks like ffmpeg is able to connect to my S3 pre-signed URL, but I still get the “Error writing trailer” error coupled with a URL read error.
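
    One detail in the trace above stands out: the PUT request is sent with Transfer-Encoding: chunked and no Content-Length, and S3 pre-signed PUT uploads do not accept chunked transfers, which is consistent with the connection being dropped partway through the upload. A possible workaround, sketched below and untested here, is to bypass ffmpeg’s HTTP output entirely: have ffmpeg write to stdout (pipe:1) and stream that pipe into S3 as a multipart upload, which never needs the total size up front. This assumes credentials with s3:PutObject on the bucket and a recent AWSSDK.S3 whose TransferUtility accepts non-seekable streams; the bucket and key names are placeholders.

    using System.Diagnostics;
    using System.Threading.Tasks;
    using Amazon.S3;
    using Amazon.S3.Transfer;

    class StreamFfmpegToS3
    {
        static async Task Main()
        {
            // Transcode to stdout ("pipe:1") instead of an HTTP URL.
            var ffmpeg = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    FileName = "ffmpeg",
                    Arguments = "-i input.wav -vn -ar 44100 -ac 2 -b:a 192k -f mp3 pipe:1",
                    RedirectStandardOutput = true,
                    UseShellExecute = false,
                }
            };
            ffmpeg.Start();

            using var s3 = new AmazonS3Client();
            var transfer = new TransferUtility(s3);

            // TransferUtility splits the (non-seekable) pipe into multipart
            // parts, so the total size never needs to be known and nothing
            // is written to local disk.
            await transfer.UploadAsync(ffmpeg.StandardOutput.BaseStream,
                                       "my-bucket", "output.mp3");
            ffmpeg.WaitForExit();
        }
    }

    If the SDK at hand insists on seekable streams, the same idea works with the low-level InitiateMultipartUpload / UploadPart / CompleteMultipartUpload calls, reading a fixed-size buffer off the pipe for each part.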

  • I Really Like My New EeePC

    29 August 2010, by Multimedia Mike — General

    Fair warning: I’m just going to use this post to blather disconnectedly about a new-ish toy.

    I really like my new EeePC. I was rather enamored with the original EeePC 701 from late 2007, a little box with a tiny 7″ screen that is credited with kicking off the netbook revolution. Since then, Asus has created about a hundred new EeePC models.

    Since I’m spending so much time on a train these days, I finally took the plunge to get a better netbook. I decided to stay loyal to Asus and their Eee lineage and got the highest-end EeePC they presently offer (which was still under US$500)– the EeePC 1201PN. The ’12’ in the model number represents a 12″ screen size and the rest of the specs are commensurately as large. Indeed, it sort of blurs the line between netbook and full-blown laptop.



    Incidentally, after I placed the order for the 1201PN nearly 2 months ago– and I mean the very next moment– this Engadget headline came across announcing the EeePC 1215N. My new high-end (such as it is) computer purchase was immediately obsoleted; I thought that only happened in parody. (As of this writing, the 1215N still doesn’t appear to be shipping, though.)

    It’s a sore point among Linux aficionados that Linux was used to help kickstart the netbook trend but that now it’s pretty much impossible to find Linux pre-installed on a netbook. So it is in this case. This 1201PN comes with Windows 7 Home Premium installed. This is a notable differentiator from most netbooks which only have Windows 7 Home Starter, a.k.a., the Windows 7 version so crippled that it doesn’t even allow the user to change the background image.

    I wished to preserve the Windows 7 installation (you never know when it will come in handy) and dual-boot Linux. I thought I would have to use the Windows partition tool to work some magic. Fortunately, the default installation had already carved the 250 GB HD in half; I was able to reformat the second partition and install Linux. The details are a little blurry, but I’m pretty sure one of those external USB optical drives shown in my last post actually performed successfully for this task. Lucky break.



    The EeePC 1201PN, EeePC 701, Belco Alpha-400, and even a comparatively gargantuan Sony Vaio full laptop– all of the portable computers in the household

    So I got Ubuntu 10.04 Linux installed in short order. This feels like something of a homecoming for me. You see, I used Linux full-time at home from 1999-2006. In 2007, I switched to using Windows XP full-time, mostly because my home use-case switched to playing a lot of old, bad computer games. By the end of 2008, I had transitioned to using the Mac Mini that I had originally purchased earlier that year for running FATE cycles. That Mac served as my main home computer until I purchased the 1201PN 2 months ago.

    Mostly, I have this overriding desire for computers to just work, at least in their basic functions. And that’s why I’m so roundly impressed with the way Linux handles right out of the box. Nearly everything on the 1201PN works in Linux. The video, the audio, the wireless networking, the webcam, it all works out of the box. I had to do the extra installation step to get the binary nVidia drivers installed but even that’s relatively seamless, especially compared to “the way things used to be” (drop to a prompt, run some binary installer from the prompt as root, watch it fail in arcane ways because the thing is only certified to run on one version of one Linux distribution). The 1201PN, with its nVidia Ion2 graphics, is able to drive its own 1366×768 screen and, simultaneously, an external monitor running at up to 2560×1600.

    The only weird hiccup in the whole process was that I had a little trouble with the special volume keys on the keyboard (specifically, the volume up/down/mute keys didn’t do anything). But I quickly learned that I had to install some package related to ACPI and they magically started to do the right thing. Now I get to encounter the Linux Flash Player bug where modifying volume via those special keys forces fullscreen mode to exit. Adobe really should fix that.

    Also, trackpad multitouch gestures don’t work right away. Based on my reading, it is possible to set those up in Linux. But it’s largely a preference thing– I don’t care much for multitouch. This creates a disparity when I use Windows 7 on the 1201PN which is configured per default to use multitouch.



    The same 4 laptops stacked up

    So, in short, I’m really happy with this little machine. Traditionally, I have had absolutely no affinity for laptops/notebooks/portable computers at all, even if everyone around me was always completely enamored with the devices. What changed for me? Well, for starters, as a long-time Linux user, I was used to having to invest in very specific, carefully researched hardware lest I not be able to use it under the Linux OS. This was always a major problem with laptops, a field where custom, proprietary hardware components typically reign supreme. These days, not so much, and these netbooks seem to contain well-supported hardware. Then there’s the fact that laptops have always cost so much more than similarly capable desktop systems, and I had no real reason to take a computer with me when I left home. So my use case changed, as did the price point for relatively low-power laptops/netbooks.

    Data I/O geek note: The 1201PN is capable of wireless-N networking — as many netbooks are — but only 100 Mbit ethernet. I wondered why it didn’t have gigabit ethernet. Then I remembered that 100 Mbit ethernet provides 11-11.5 Mbytes/sec of transfer speed which, in my empirical experience, is approximately the maximum write speed of a 5400 RPM hard drive– which is what the 1201PN possesses.