Other articles (93)

  • Improvements to the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the ergonomics of multiple-select fields. See the following two images for a comparison.
    All it takes is to enable the Chosen plugin (general site configuration > plugin management), then configure it (Templates > Chosen) by turning Chosen on for the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
    This lets channel administrators configure these menus precisely.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page, after the header block, and its identifier makes it compatible with Zpip-based templates; (...)

  • Submitting bugs and patches

    10 April 2011

    Unfortunately, no piece of software is ever perfect...
    If you think you have found a bug, report it in our ticket system, taking care to include the relevant information: the exact type and version of the browser with which you encountered the anomaly; as precise an explanation as possible of the problem; if possible, the steps to reproduce it; a link to the site / page in question;
    If you think you have fixed the bug yourself (...)

On other sites (7431)

  • Installing ffmpeg, librosa and pydub in Apache Spark container

    17 April 2023, by yaviens

    I'm working on a Python Spark streaming project that needs pydub and librosa to process audio; these libraries require the ffmpeg library to be installed. I'm having trouble building the Spark containers with these libraries and I don't know how to solve this problem.

    


    I use a docker-compose.yml to build the images and to define the ports, etc. of the Spark master and workers.
The docker-compose.yml:

    


    version: "3.3"
services:
  spark-master:
    build:
      context: ./
      dockerfile: Dockerfile
    #image: docker.io/bitnami/spark:3.3
    ports:
      - "9090:8080"
      - "7077:7077"
    volumes:
       - ./apps:/opt/spark-apps
       - ./data:/opt/spark-data
       - ./data:/data
       - ./src:/src
       - ./output:/output
    environment:
      - SPARK_LOCAL_IP=spark-master
      - SPARK_WORKLOAD=master
  spark-worker-a:
    build:
      context: ./
      dockerfile: Dockerfile
    #image: docker.io/bitnami/spark:3.3
    ports:
      - "9091:8080"
      - "7000:7000"
    depends_on:
      - spark-master
    environment:
      - SPARK_MASTER=spark://spark-master:7077
      - SPARK_WORKER_CORES=1
      - SPARK_WORKER_MEMORY=1G
      - SPARK_DRIVER_MEMORY=1G
      - SPARK_EXECUTOR_MEMORY=1G
      - SPARK_WORKLOAD=worker
      - SPARK_LOCAL_IP=spark-worker-a
    volumes:
       - ./apps:/opt/spark-apps
       - ./data:/opt/spark-data
       - ./data:/data
       - ./src:/src
       - ./output:/output       
  spark-worker-b:
    build:
      context: ./
      dockerfile: Dockerfile
    #image: docker.io/bitnami/spark:3.3
    ports:
      - "9092:8080"
      - "7001:7000"
    depends_on:
      - spark-master
    environment:
      - SPARK_MASTER=spark://spark-master:7077
      - SPARK_WORKER_CORES=1
      - SPARK_WORKER_MEMORY=1G
      - SPARK_DRIVER_MEMORY=1G
      - SPARK_EXECUTOR_MEMORY=1G
      - SPARK_WORKLOAD=worker
      - SPARK_LOCAL_IP=spark-worker-b
    volumes:
        - ./apps:/opt/spark-apps
        - ./data:/opt/spark-data
        - ./data:/data
        - ./src:/src
        - ./output:/output 
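
For reference, the stack above would normally be built and started from the directory holding these files; this is standard Compose usage, not anything specific to this setup:

docker-compose build
docker-compose up

(docker-compose up --build combines both steps.)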


    


    In the same directory as the docker-compose.yml is the Dockerfile I use to build the image:

    


    # builder step used to download and configure spark environment
FROM openjdk:11.0.11-jre-slim-buster as builder

# Add Dependencies for PySpark
RUN apt-get update && apt-get install -y curl vim wget software-properties-common ssh net-tools ca-certificates python3 python3-pip python3-numpy python3-matplotlib python3-scipy python3-pandas python3-simpy

RUN update-alternatives --install "/usr/bin/python" "python" "$(which python3)" 1

# Fix the value of PYTHONHASHSEED
# Note: this is needed when you use Python 3.3 or greater
ENV SPARK_VERSION=3.0.2 \
HADOOP_VERSION=3.2 \
SPARK_HOME=/opt/spark \
PYTHONHASHSEED=1

# Download and uncompress spark from the apache archive
RUN wget --no-verbose -O apache-spark.tgz "https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz" \
&& mkdir -p /opt/spark \
&& tar -xf apache-spark.tgz -C /opt/spark --strip-components=1 \
&& rm apache-spark.tgz


# Apache spark environment
FROM builder as apache-spark

WORKDIR /opt/spark

ENV SPARK_MASTER_PORT=7077 \
SPARK_MASTER_WEBUI_PORT=8080 \
SPARK_LOG_DIR=/opt/spark/logs \
SPARK_MASTER_LOG=/opt/spark/logs/spark-master.out \
SPARK_WORKER_LOG=/opt/spark/logs/spark-worker.out \
SPARK_WORKER_WEBUI_PORT=8080 \
SPARK_WORKER_PORT=7000 \
SPARK_MASTER="spark://spark-master:7077" \
SPARK_WORKLOAD="master"

EXPOSE 8080 7077 6066

RUN mkdir -p $SPARK_LOG_DIR && \
touch $SPARK_MASTER_LOG && \
touch $SPARK_WORKER_LOG && \
ln -sf /dev/stdout $SPARK_MASTER_LOG && \
ln -sf /dev/stdout $SPARK_WORKER_LOG

# Install ffmpeg lib
RUN apt-get -y update
RUN apt-get -y upgrade
RUN apt-get install -y ffmpeg
RUN apt-get -y install apt-utils gcc libpq-dev libsndfile-dev

#RUN apt-get update \
#&& apt-get upgrade -y \
#&& apt-get install -y \
#&& apt-get -y install apt-utils gcc libpq-dev libsndfile-dev

# Install required python libs
COPY requirements.txt .
RUN pip3 install -r requirements.txt

COPY start-spark.sh /

CMD ["/bin/bash", "https://net.cloudinfrastructureservices.co.uk/start-spark.sh"]


    


    The start-spark.sh:

    


    #!/bin/bash
. "https://net.cloudinfrastructureservices.co.uk/opt/spark/bin/load-spark-env.sh"
# When the spark workload is master, run class org.apache.spark.deploy.master.Master
if [ "$SPARK_WORKLOAD" == "master" ];
then

export SPARK_MASTER_HOST=`hostname`

cd /opt/spark/bin && ./spark-class org.apache.spark.deploy.master.Master --ip $SPARK_MASTER_HOST --port $SPARK_MASTER_PORT --webui-port $SPARK_MASTER_WEBUI_PORT >> $SPARK_MASTER_LOG

elif [ "$SPARK_WORKLOAD" == "worker" ];
then
# When the spark workload is worker, run class org.apache.spark.deploy.worker.Worker
cd /opt/spark/bin && ./spark-class org.apache.spark.deploy.worker.Worker --webui-port $SPARK_WORKER_WEBUI_PORT $SPARK_MASTER >> $SPARK_WORKER_LOG

elif [ "$SPARK_WORKLOAD" == "submit" ];
then
    echo "SPARK SUBMIT"
else
    echo "Undefined Workload Type $SPARK_WORKLOAD, must specify: master, worker, submit"
fi
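
The submit branch above is only a stub; in practice a job would be sent to the running cluster from the host, for example (the container name is taken from the compose output below; the application path under the mounted ./apps volume is hypothetical):

docker exec spark_container-spark-master-1 /opt/spark/bin/spark-submit --master spark://spark-master:7077 /opt/spark-apps/my_stream_job.py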


    


    When I execute : "docker-compose up" I have the next error message :

    


    [+] Running 4/4
 - Network spark_container_default             Created                                                                                0.9s 
 - Container spark_container-spark-master-1    Created                                                                                0.3s 
 - Container spark_container-spark-worker-a-1  Created                                                                                0.4s 
 - Container spark_container-spark-worker-b-1  Created                                                                                0.4s 
Attaching to spark_container-spark-master-1, spark_container-spark-worker-a-1, spark_container-spark-worker-b-1
spark_container-spark-master-1    | /bin/bash: /start-spark.sh: No such file or directory
spark_container-spark-master-1 exited with code 127
spark_container-spark-worker-a-1  | /bin/bash: /start-spark.sh: No such file or directory
spark_container-spark-worker-b-1  | /bin/bash: /start-spark.sh: No such file or directory
spark_container-spark-worker-a-1 exited with code 127
spark_container-spark-worker-b-1 exited with code 127
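
For context, exit code 127 together with "No such file or directory" means bash could not find /start-spark.sh inside the containers: either start-spark.sh was not next to the Dockerfile when the image was built, or the COPY destination does not match the path in CMD. A quick way to inspect the built image (the image tag is an assumption, adjust to yours):

docker run --rm --entrypoint ls spark_container-spark-master -l /start-spark.sh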


    


  • Get video duration of file hosted on Amazon S3

    26 October 2016, by Michi

    I'm starting a portal which distributes videos. The idea is to upload the videos to Amazon S3 and gather the necessary data with PHP from my server. So far everything works fine... the only thing I could not manage to get is the duration of the video :-( Could anybody give me a hint on how to accomplish it?

    Thanks,
    Miguel

    UPDATE:

    I finally opted to do it with FFmpeg. I have already installed FFmpeg on the server and am now trying to run the command in the shell before executing it from PHP. I'm passing it the URL from Amazon (I tried both the CloudFront URL and the S3 URL), but it says there is no such file or directory. I've seen examples on the web using external files, so I expected it to work.

    The command I’m using is

    ffmpeg -i https://s3-eu-west-1.amazonaws.com/path/to/file.m4v

    Is there something I need to configure in order to use external URLs?
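
    For what it's worth, ffmpeg can read http/https inputs only if it was built with network support (https additionally needs TLS enabled at build time), and for just the duration ffprobe, which ships with ffmpeg, is usually more convenient. A minimal sketch against the question's placeholder URL:

ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "https://s3-eu-west-1.amazonaws.com/path/to/file.m4v"

    This prints the duration in seconds on stdout, which is easy to capture from PHP with shell_exec().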

  • How to get the length of a video file uploaded to Amazon S3?

    25 May 2015, by TSP

    I am using plupload to upload video files to Amazon S3 and am playing them with JWPlayer. Before a video file is played, I display the list of video files uploaded to S3, and in this list I would like to show each video's duration.

    I have read about the ffmpeg approach used with PHP. Is there a better approach to get the duration?

    Regards
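
    One pattern worth considering, sketched here as an assumption rather than anything from the thread: probe each file once at upload time and store the result as S3 object metadata, so the listing page never has to shell out to ffmpeg. With the AWS CLI and placeholder bucket/key names:

duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 file.m4v)
aws s3api copy-object --bucket my-bucket --key videos/file.m4v --copy-source my-bucket/videos/file.m4v --metadata duration="$duration" --metadata-directive REPLACE

    S3 metadata cannot be edited in place, hence the copy-object with --metadata-directive REPLACE.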