
Media (91)
-
DJ Z-trip - Victory Lap: The Obama Mix Pt. 2
15 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Matmos - Action at a Distance
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Danger Mouse & Jemini - What U Sittin’ On? (starring Cee Lo and Tha Alkaholiks)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Cornelius - Wataridori 2
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Rapture - Sister Saviour (Blackstrobe Remix)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (46)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in the standalone version.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
Making files available
14 April 2011
By default, when it is first set up, MediaSPIP does not allow visitors to download files, whether they are originals or the result of transformation or encoding. It only allows them to be viewed.
However, it is both possible and easy to give visitors access to these documents, in various forms.
All of this happens in the template configuration page. Go to the channel's administration area and choose, in the navigation, (...) -
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in the standalone version.
To get a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
On other sites (5943)
-
VLC dead input for RTP stream
27 March, by CaptainCheese
I'm working on creating an RTP stream that's meant to display live waveform data from Pioneer prolink players. The motivation for sending this video out is to be able to receive it in a Flutter frontend. I initially just sent a base-24 encoding of the raw ARGB packed ints for each frame across a Kafka topic, but processing that data in Flutter proved untenable and was bogging down the main UI thread. I'm not sure this is the most optimal way of going about it, but I'm trying to get anything to work if it means some speedup on the frontend. The issue with the following implementation is that when I run
vlc --rtsp-timeout=120000 --network-caching=30000 -vvvv stream_1.sdp
where

% cat stream_1.sdp
v=0
o=- 0 1 IN IP4 127.0.0.1
s=RTP Stream
c=IN IP4 127.0.0.1
t=0 0
a=tool:libavformat
m=video 5007 RTP/AVP 96
a=rtpmap:96 H264/90000



I see (among other questionable logs) the following:


[0000000144c44d10] live555 demux error: no data received in 10s, aborting
[00000001430ee2f0] main input debug: EOF reached
[0000000144b160c0] main decoder debug: killing decoder fourcc `h264'
[0000000144b160c0] main decoder debug: removing module "videotoolbox"
[0000000144b164a0] main packetizer debug: removing module "h264"
[0000000144c44d10] main demux debug: removing module "live555"
[0000000144c45bb0] main stream debug: removing module "record"
[0000000144a64960] main stream debug: removing module "cache_read"
[0000000144c29c00] main stream debug: removing module "filesystem"
[00000001430ee2f0] main input debug: Program doesn't contain anymore ES
[0000000144806260] main playlist debug: dead input
[0000000144806260] main playlist debug: changing item without a request (current 0/1)
[0000000144806260] main playlist debug: nothing to play
[0000000142e083c0] macosx interface debug: Playback has been ended
[0000000142e083c0] macosx interface debug: Releasing IOKit system sleep blocker (37463)



This is sort of confusing because when I run
ffmpeg -protocol_whitelist file,crypto,data,rtp,udp -i stream_1.sdp -vcodec libx264 -f null -

I see a number of logs about

[h264 @ 0x139304080] non-existing PPS 0 referenced
 Last message repeated 1 times
[h264 @ 0x139304080] decode_slice_header error
[h264 @ 0x139304080] no frame!



After these, the stream is received and I start getting telemetry on it:


Input #0, sdp, from 'stream_1.sdp':
 Metadata:
 title : RTP Stream
 Duration: N/A, start: 0.016667, bitrate: N/A
 Stream #0:0: Video: h264 (Constrained Baseline), yuv420p(progressive), 1200x200, 60 fps, 60 tbr, 90k tbn
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x107f04f40] using cpu capabilities: ARMv8 NEON
[libx264 @ 0x107f04f40] profile High, level 3.1, 4:2:0, 8-bit
Output #0, null, to 'pipe:':
 Metadata:
 title : RTP Stream
 encoder : Lavf61.7.100
 Stream #0:0: Video: h264, yuv420p(tv, progressive), 1200x200, q=2-31, 60 fps, 60 tbn
 Metadata:
 encoder : Lavc61.19.101 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[out#0/null @ 0x60000069c000] video:144KiB audio:0KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: unknown
frame= 1404 fps= 49 q=-1.0 Lsize=N/A time=00:00:23.88 bitrate=N/A speed=0.834x



Not sure why VLC is turning me down like some kind of Berghain bouncer that lets nobody in the entire night.
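Since ffmpeg evidently reads the same SDP fine, one sanity check (assuming ffplay is available) is

ffplay -protocol_whitelist file,rtp,udp stream_1.sdp

If ffplay renders frames while VLC times out, the packets are clearly arriving, and the difference is down to how each tool consumes the SDP.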


I initially tried just converting the ARGB ints to a YUV420p buffer and using that to create the Frame objects, but I couldn't for the life of me figure out how to initialize it properly; my attempts kept spitting out garbled junk.
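For context, the conversion I was attempting looks roughly like this. This is a sketch, not the code I actually ran: it assumes an I420 plane layout (full-size Y plane followed by quarter-size U and V planes), even frame dimensions, and integer-approximated BT.601 coefficients:

public static byte[] argbToI420(int[] argb, int width, int height) {
    byte[] out = new byte[width * height * 3 / 2];
    int uOffset = width * height;                       // Y plane ends here
    int vOffset = uOffset + (width / 2) * (height / 2); // U plane ends here
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int p = argb[y * width + x];
            int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            // Luma for every pixel: ~0.299 R + 0.587 G + 0.114 B
            out[y * width + x] = (byte) ((77 * r + 150 * g + 29 * b) >> 8);
            // One chroma sample per 2x2 block (top-left pixel, no averaging)
            if ((y & 1) == 0 && (x & 1) == 0) {
                int ci = (y / 2) * (width / 2) + (x / 2);
                out[uOffset + ci] = (byte) (((-43 * r - 85 * g + 128 * b) >> 8) + 128);
                out[vOffset + ci] = (byte) (((128 * r - 107 * g - 21 * b) >> 8) + 128);
            }
        }
    }
    return out;
}

Even with the bytes right, the part I kept fumbling was describing the buffer to JavaCV: a planar YUV Frame needs correct strides for all three planes, whereas packed RGB has a single stride, which is part of why the code below sticks to RGB24 and lets the recorder do the conversion.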


Please go easy on me; I've made an unhealthy habit of resolving nearly all of my coding questions by lurking the internet for answers, but that isn't helping me solve this issue.


Here's the Java I'm working on (the meat of the rtp comms occurs within
updateWaveformForPlayer()
) :

package com.bugbytz.prolink;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.bytedeco.ffmpeg.global.avcodec;
import org.bytedeco.ffmpeg.global.avutil;
import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.FFmpegLogCallback;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.FrameGrabber;
import org.deepsymmetry.beatlink.CdjStatus;
import org.deepsymmetry.beatlink.DeviceAnnouncement;
import org.deepsymmetry.beatlink.DeviceAnnouncementAdapter;
import org.deepsymmetry.beatlink.DeviceFinder;
import org.deepsymmetry.beatlink.Util;
import org.deepsymmetry.beatlink.VirtualCdj;
import org.deepsymmetry.beatlink.data.BeatGridFinder;
import org.deepsymmetry.beatlink.data.CrateDigger;
import org.deepsymmetry.beatlink.data.MetadataFinder;
import org.deepsymmetry.beatlink.data.TimeFinder;
import org.deepsymmetry.beatlink.data.WaveformDetail;
import org.deepsymmetry.beatlink.data.WaveformDetailComponent;
import org.deepsymmetry.beatlink.data.WaveformFinder;

import java.awt.*;
import java.awt.image.BufferedImage;
import java.io.File;
import java.nio.ByteBuffer;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

import static org.bytedeco.ffmpeg.global.avutil.AV_PIX_FMT_RGB24;

public class App {
 public static ArrayList<Track> tracks = new ArrayList<>();
 public static boolean dbRead = false;
 public static Properties props = new Properties();
 private static Map<Integer, FFmpegFrameRecorder> recorders = new HashMap<>();
 private static Map<Integer, Integer> frameCount = new HashMap<>();

 private static final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
 private static final int FPS = 60;
 private static final int FRAME_INTERVAL_MS = 1000 / FPS;
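 // 1000 / 60 truncates to 16 ms, so frames are actually scheduled at ~62.5 Hz, slightly faster than the 60 fps the recorders are configured for.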

 private static Map<Integer, ScheduledFuture<?>> schedules = new HashMap<>();

 private static Set<Integer> streamingPlayers = new HashSet<>();

 public static String byteArrayToMacString(byte[] macBytes) {
 StringBuilder sb = new StringBuilder();
 for (int i = 0; i < macBytes.length; i++) {
 sb.append(String.format("%02X%s", macBytes[i], (i < macBytes.length - 1) ? ":" : ""));
 }
 return sb.toString();
 }

 private static void updateWaveformForPlayer(int player) throws Exception {
 Integer frame_for_player = frameCount.get(player);
 if (frame_for_player == null) {
 frame_for_player = 0;
 frameCount.putIfAbsent(player, frame_for_player);
 }

 if (!WaveformFinder.getInstance().isRunning()) {
 WaveformFinder.getInstance().start();
 }
 WaveformDetail detail = WaveformFinder.getInstance().getLatestDetailFor(player);

 if (detail != null) {
 WaveformDetailComponent component = (WaveformDetailComponent) detail.createViewComponent(
 MetadataFinder.getInstance().getLatestMetadataFor(player),
 BeatGridFinder.getInstance().getLatestBeatGridFor(player)
 );
 component.setMonitoredPlayer(player);
 component.setPlaybackState(player, TimeFinder.getInstance().getTimeFor(player), true);
 component.setAutoScroll(true);
 int width = 1200;
 int height = 200;
 Dimension dimension = new Dimension(width, height);
 component.setPreferredSize(dimension);
 component.setSize(dimension);
 component.setScale(1);
 component.doLayout();

 // Create a fresh BufferedImage and clear it before rendering
 BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
 Graphics2D g = image.createGraphics();
 g.clearRect(0, 0, width, height); // Clear any old content

 // Draw waveform into the BufferedImage
 component.paint(g);
 g.dispose();

 int port = 5004 + player;
 String inputFile = port + "_" + frame_for_player + ".mp4";
 // Initialize the FFmpegFrameRecorder for YUV420P
 FFmpegFrameRecorder recorder_file = new FFmpegFrameRecorder(inputFile, width, height);
 FFmpegLogCallback.set(); // Enable FFmpeg logging for debugging
 recorder_file.setFormat("mp4");
 recorder_file.setVideoCodec(avcodec.AV_CODEC_ID_H264);
 recorder_file.setPixelFormat(avutil.AV_PIX_FMT_YUV420P); // Use YUV420P format directly
 recorder_file.setFrameRate(FPS);

 // Set video options
 recorder_file.setVideoOption("preset", "ultrafast");
 recorder_file.setVideoOption("tune", "zerolatency");
 recorder_file.setVideoOption("x264-params", "repeat-headers=1");
 recorder_file.setGopSize(FPS);
 try {
 recorder_file.start(); // Ensure this is called before recording any frames
 System.out.println("Recorder started successfully for player: " + player);
 } catch (org.bytedeco.javacv.FFmpegFrameRecorder.Exception e) {
 e.printStackTrace();
 }

 // Get all pixels in one call
 int[] pixels = new int[width * height];
 image.getRGB(0, 0, width, height, pixels, 0, width);
 recorder_file.recordImage(width, height, Frame.DEPTH_UBYTE, 1, 3 * width, AV_PIX_FMT_RGB24, ByteBuffer.wrap(argbToByteArray(pixels, width, height)));
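 // Assumption: since the recorder was configured for YUV420P above, JavaCV/FFmpeg converts this packed RGB24 buffer (stride 3 * width) internally.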
 recorder_file.stop();
 recorder_file.release();
 final FFmpegFrameRecorder recorder = recorders.get(player);
 FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(inputFile);


 try {
 grabber.start();
 } catch (Exception e) {
 e.printStackTrace();
 }
 if (recorder == null) {
 try {
 String outputStream = "rtp://127.0.0.1:" + port;
 FFmpegFrameRecorder initial_recorder = new FFmpegFrameRecorder(outputStream, grabber.getImageWidth(), grabber.getImageHeight());
 initial_recorder.setFormat("rtp");
 initial_recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
 initial_recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
 initial_recorder.setFrameRate(grabber.getFrameRate());
 initial_recorder.setGopSize(FPS);
 initial_recorder.setVideoOption("x264-params", "keyint=60");
 initial_recorder.setVideoOption("rtsp_transport", "tcp");
 initial_recorder.start();
 recorders.putIfAbsent(player, initial_recorder);
 frameCount.putIfAbsent(player, 0);
 putToRTP(player, grabber, initial_recorder);
 }
 catch (Exception e) {
 e.printStackTrace();
 }
 }
 else {
 putToRTP(player, grabber, recorder);
 }
 File file = new File(inputFile);
 if (file.exists() && file.delete()) {
 System.out.println("Successfully deleted file: " + inputFile);
 } else {
 System.out.println("Failed to delete file: " + inputFile);
 }
 }
 }

 public static void putToRTP(int player, FFmpegFrameGrabber grabber, FFmpegFrameRecorder recorder) throws FrameGrabber.Exception {
 final Frame frame = grabber.grabFrame();
 int frameCount_local = frameCount.get(player);
 frame.keyFrame = frameCount_local++ % FPS == 0;
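 // Intended to mark one frame per second as a keyframe; whether the RTP-side x264 encoder honors this flag (versus its own keyint/gop settings) is an assumption, not something I've verified.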
 frameCount.put(player, frameCount_local);
 try {
 recorder.record(frame);
 } catch (FFmpegFrameRecorder.Exception e) {
 throw new RuntimeException(e);
 }
 }
 public static byte[] argbToByteArray(int[] argb, int width, int height) {
 int totalPixels = width * height;
 byte[] byteArray = new byte[totalPixels * 3]; // 3 bytes per pixel (packed RGB)

 for (int i = 0; i < totalPixels; i++) {
 int argbPixel = argb[i];

 byteArray[i * 3] = (byte) ((argbPixel >> 16) & 0xFF); // Red
 byteArray[i * 3 + 1] = (byte) ((argbPixel >> 8) & 0xFF); // Green
 byteArray[i * 3 + 2] = (byte) (argbPixel & 0xFF); // Blue
 }

 return byteArray;
 }


 public static void main(String[] args) throws Exception {
 VirtualCdj.getInstance().setDeviceNumber((byte) 4);
 CrateDigger.getInstance().addDatabaseListener(new DBService());
 props.put("bootstrap.servers", "localhost:9092");
 props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
 props.put("value.serializer", "com.bugbytz.prolink.CustomSerializer");
 props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "20971520");

 VirtualCdj.getInstance().addUpdateListener(update -> {
 if (update instanceof CdjStatus) {
 try (Producer<String, DeviceStatus> producer = new KafkaProducer<>(props)) {
 DecimalFormat df_obj = new DecimalFormat("#.##");
 DeviceStatus deviceStatus = new DeviceStatus(
 update.getDeviceNumber(),
 ((CdjStatus) update).isPlaying() || !((CdjStatus) update).isPaused(),
 ((CdjStatus) update).getBeatNumber(),
 update.getBeatWithinBar(),
 Double.parseDouble(df_obj.format(update.getEffectiveTempo())),
 Double.parseDouble(df_obj.format(Util.pitchToPercentage(update.getPitch()))),
 update.getAddress().getHostAddress(),
 byteArrayToMacString(DeviceFinder.getInstance().getLatestAnnouncementFrom(update.getDeviceNumber()).getHardwareAddress()),
 ((CdjStatus) update).getRekordboxId(),
 update.getDeviceName()
 );
 ProducerRecord<String, DeviceStatus> record = new ProducerRecord<>("device-status", "device-" + update.getDeviceNumber(), deviceStatus);
 try {
 producer.send(record).get();
 } catch (InterruptedException ex) {
 throw new RuntimeException(ex);
 } catch (ExecutionException ex) {
 throw new RuntimeException(ex);
 }
 producer.flush();
 if (!WaveformFinder.getInstance().isRunning()) {
 try {
 WaveformFinder.getInstance().start();
 } catch (Exception ex) {
 throw new RuntimeException(ex);
 }
 }
 }
 }
 });
 DeviceFinder.getInstance().addDeviceAnnouncementListener(new DeviceAnnouncementAdapter() {
 @Override
 public void deviceFound(DeviceAnnouncement announcement) {
 if (!streamingPlayers.contains(announcement.getDeviceNumber())) {
 streamingPlayers.add(announcement.getDeviceNumber());
 schedules.putIfAbsent(announcement.getDeviceNumber(), scheduler.scheduleAtFixedRate(() -> {
 try {
 Runnable task = () -> {
 try {
 updateWaveformForPlayer(announcement.getDeviceNumber());
 } catch (InterruptedException e) {
 System.out.println("Thread interrupted");
 } catch (Exception e) {
 throw new RuntimeException(e);
 }
 System.out.println("Lambda thread work completed!");
 };
 task.run();
 } catch (Exception e) {
 e.printStackTrace();
 }
 }, 0, FRAME_INTERVAL_MS, TimeUnit.MILLISECONDS));
 }
 }

 @Override
 public void deviceLost(DeviceAnnouncement announcement) {
 if (streamingPlayers.contains(announcement.getDeviceNumber())) {
 schedules.get(announcement.getDeviceNumber()).cancel(true);
 streamingPlayers.remove(announcement.getDeviceNumber());
 }
 }
 });
 BeatGridFinder.getInstance().start();
 MetadataFinder.getInstance().start();
 VirtualCdj.getInstance().start();
 TimeFinder.getInstance().start();
 DeviceFinder.getInstance().start();
 CrateDigger.getInstance().start();

 try {
 LoadCommandConsumer consumer = new LoadCommandConsumer("localhost:9092", "load-command-group");
 Thread consumerThread = new Thread(consumer::startConsuming);
 consumerThread.start();

 Runtime.getRuntime().addShutdownHook(new Thread(() -> {
 consumer.shutdown();
 try {
 consumerThread.join();
 } catch (InterruptedException e) {
 Thread.currentThread().interrupt();
 }
 }));
 Thread.sleep(60000);
 } catch (InterruptedException e) {
 System.out.println("Interrupted, exiting.");
 }
 }
}


-
What Is Data Ethics & Why Is It Important in Business?
9 May 2024, by Erin
-
My journey to Coviu
27 October 2015, by silvia
My new startup just released our MVP – this is the story of what got me here.
I love creating new applications that let people do their work better or in a manner that wasn’t possible before.
My first such passion was as a student intern when I built a system for a building and loan association’s monthly customer magazine. The group I worked with was managing their advertiser contacts through a set of paper cards, and I wrote a dBase-based system (yes, that long ago) that would manage their customer relationships. They loved it – until it got replaced by an SAP system that cost 100 times what I cost them, had really poor UX, and only gave them half the functionality. It was a corporate system with ongoing support, which made all the difference to them.
The story repeated itself with a CRM for my Uncle’s construction company, and with a resume and quotation management system for Accenture right after Uni, both of which I left behind when I decided to go into research.
Even as a PhD student, I never lost sight of challenges that people were facing and wanted to develop technology to overcome problems. The aim of my PhD thesis was to prepare for the oncoming onslaught of audio and video on the Internet (yes, this was 1994!) by developing algorithms to automatically extract and locate information in such files, which would enable users to structure, index and search such content.
Many of the use cases that we explored are now part of products or continue to be challenges: finding music that matches your preferences, identifying music or video pieces, e.g. to count ads on the radio or to mark copyright infringement, or the automated creation of video summaries such as trailers.
This continued when I joined the CSIRO in Australia – I was working on segmenting speech into words or talk spurts, since that would simplify captioning & subtitling, and on MPEG-7, which was a (slightly over-engineered) standard to structure metadata about audio and video.
In 2001 I had the idea of replicating the Web for videos: i.e. creating hyperlinked and searchable video-only experiences. We called it “Annodex” for annotated and indexed video, and it needed full-screen hyperlinked video in browsers – man, were we ahead of our time! It was my first step into standards, put several IETF RFCs to my name, and started my involvement with open codecs through Xiph.
Around the time that YouTube was founded in 2006, I founded Vquence – originally a video search company for the Web, which pivoted to a video metadata mining company. Vquence still exists and continues to sell its data to channel partners, but it lacks the user impact that has always driven my work.
As the video element started being developed for HTML5, I had to get involved. I contributed many use cases to the W3C, became a co-editor of the HTML5 spec and focused on video captioning with WebVTT while contracting to Mozilla and later to Google. We made huge progress and today the technology exists to publish video on the Web with captions, making the Web more inclusive for everybody. I contributed code to YouTube and Google Chrome, but was keen to make a bigger impact again.
The opportunity came when a couple of former CSIRO colleagues who now worked for NICTA approached me to get me interested in addressing new use cases for video conferencing in the context of WebRTC. We worked on a kiosk-style solution to service delivery for large service organisations, particularly targeting government. The emerging WebRTC standard posed many technical challenges that we addressed by building rtc.io, by contributing to the standards, and by filing bugs against the browsers.
Fast-forward through the development of a few further custom solutions for customers in health and education, and we started to see patterns of need emerge. The core learning we came away with is that to get things done, you have to go beyond “talking heads” in a video call. It’s not just about seeing the other person, but much more about having a shared view of the things that need to be worked on and a shared way of interacting with them. We also learnt that the things being worked on are quite varied and may include multiple input cameras, digital documents, Web pages, applications, device data, controls, and forms.
So we set out to build a solution that would enable productive remote collaboration to take place. It would need to provide an excellent user experience, be simple to work with, cover the standard use cases out of the box, yet be architected to be extensible for the specialised data sharing needs that we knew some of our customers had. It would need to be usable directly on Coviu.com, but also able to integrate with specialised applications that some of our customers were already using, such as the applications they spend most of their time in (CRMs, practice management systems, learning management systems, team chat systems). It would need to require our customers to sign up, yet allow their clients to join a call without signing up.
Collaboration is a big problem. People are continuing to get more comfortable with technology and are less and less inclined to travel distances just to get a service done. In a country as large as Australia, where 12% of the population lives in rural and remote areas, people may not even be able to travel distances, particularly to receive or provide recurring or specialised services, or to achieve work/life balance. To make the world a global village, we need to be able to work together better remotely.
The need for collaboration is being recognised by specialised Web applications already, such as the LiveShare feature of Invision for Designers, Codassium for pair programming, or the recently announced Dropbox Paper. Few go all the way to video – WebRTC is still regarded as a complicated feature to support.
With Coviu, we’d like to offer a collaboration feature to every Web app. We now have a Web app that provides a modern and beautifully designed collaboration interface. To enable other Web apps to integrate it, we are now developing an API. Integration may entail customisation of the data sharing part of Coviu – something Coviu has been designed for. How to replicate the data and keep it consistent when people collaborate remotely – that is where Coviu makes a difference.
We have started our journey and have just launched free signup to the Coviu base product, which allows individuals to own their own “room” (i.e. a fixed URL) in which to collaborate with others. A huge shout out goes to everyone in the Coviu team – a pretty amazing group of people – who have turned the app from an idea to reality. You are all awesome!
With Coviu you can share and annotate:
- images (show your mum photos of your last holidays, or get feedback on an architecture diagram from a customer),
- pdf files (give a presentation remotely, or walk a customer through a contract),
- whiteboards (brainstorm with a colleague), and
- share an application window (watch a YouTube video together, or work through your task list with your colleagues).
All of these are regarded as “shared documents” in Coviu and thus have zooming and annotation features, and are listed in a document tray for ease of navigation.
This is just the beginning of how we want to make working together online more productive. Give it a go and let us know what you think.