


  • Collaborative Workflows in the Cloud: Embracing the Future

    Article originally published in the IABM Journal (3rd Quarter 2022)

    The pandemic has massively accelerated the demand for streaming video services, with audiences spending more time at home. In France for example, the SVOD market grew by 43% in 2020 (source: CNC-GFK), and the trend is quite similar in most countries.

    As a consequence, the broadcast and entertainment industry had to face a new challenge: producing more content with fewer resources available on site, and adapting this content to an international audience. This situation has highlighted the need for tools that allow media professionals to work together efficiently: to continue preparing and exchanging content with partners, to deliver programmes to audiences quickly, and so on.

    What if cloud collaboration between production companies, post-production houses, broadcasters and other partners were the key to managing media content quickly and efficiently, regardless of location? Not to mention the environmental stakes, which are of prime importance today!


    The days when digital tools and the cloud were viewed with suspicion in the broadcast world are over. The fear of teams being stripped of their skills by the cloud seems to be fading, and the advantages and limitations of these new tools are now quite clearly established.

    We observe that nearly every company now uses the cloud, but as an adjunct to its own solution rather than for genuinely full collaboration.

    The problem therefore no longer really lies in the use of the cloud, but in the ability of the various players involved in the creation and validation of content to work together. Content exchanges for the revision and validation stages are still manual, even with digital tools: multiple emails, file-transfer platforms, collaborative cloud tools for tracking revisions (mainly office suites), and so on.

    All these tools, however efficient each may seem, are used with no link between them, which forces manual steps that are complex to orchestrate, time-consuming, error-prone and problematic for content security.

    A single, secure, easy-to-use cloud hub, genuinely designed for broadcast processes, is thus becoming a real necessity in order to work collaboratively on content and deliver it to playout within the given timeframe.


    Let's take a simple use case: the preparation of multilingual content intended for broadcasting on a linear or VOD platform.

    In simple terms, the process usually involves several actors, from different companies:
    • the rights-holder delivers the program to the broadcaster who bought it;
    • the broadcaster checks that the program meets the required technical standards and specifications, through a Quality Control step;
    • the broadcaster then entrusts the program to a production house or a lab, in order to add subtitles;
    • the lab calls on its translators, mostly freelancers, working remotely, each in a different part of the world, possibly in different time zones, and working at their own pace.

    All these steps are mainly done with business-specific tools; content is exchanged repeatedly through separate tools to review, comment, modify, review again and so on, until final validation.
    The lab must ensure that the project runs smoothly, that the timing is respected and that the quality of the work delivered is good, before supplying the final media to the broadcaster.

    On this type of project, there are several pitfalls: how do you get a global view of the freelancers' work? How do you make actors from different companies work together without wasting time? How do you manage multiple versions of files without manual errors? How do you keep the budget in check and guarantee deadlines? Finally, the question of the ecological cost of all these exchanges must be asked, at a time when eco-production is a real issue.

    To address these pitfalls, Videomenthe delivers a fluid, collaborative workflow via a SaaS platform dedicated to the management of media workflows. The idea is to offer all the necessary tools on a single platform on which all partners can work, each with a specific user interface. There is no need for multiple pieces of software: everything is provided in a secure cloud interface that answers the needs and specifications of the broadcast industry (and now the corporate world as well). The content provider and all the partners involved in the global workflow can monitor and view the different steps through which the content is processed.

    The workflow is entirely done on the platform:
    • The broadcaster uploads the file and launches a technical QC step according to its chosen test plan, a step managed by the technical team
    • The file then goes through an editorial check, managed this time by the editorial team
    • If both technical and editorial checks pass, the file moves to the transcription and translation steps, managed by the post-production house / lab
    • The translators have access to the file with restricted rights, according to the language they are responsible for
    • Once they have reviewed, corrected and validated the file, the post-production house can validate it or request an additional review if needed
    • Once the workflow is finished, the broadcaster can download the ready-to-broadcast content
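    To make the sequence concrete, the stages above can be sketched as a tiny state machine in Python. This is purely an illustrative model, not Eolementhe's actual implementation: the stage names and transition rules are our own assumptions.

```python
# Illustrative model of the validation workflow (not Eolementhe's real code).
STAGES = ["technical_qc", "editorial_check", "transcription",
          "translation", "lab_validation", "delivery"]

def advance(current_stage, passed):
    """Move to the next stage when a step passes; a failed step keeps
    the file in place for correction and re-review."""
    i = STAGES.index(current_stage)
    if not passed:
        return current_stage          # stays for correction / re-review
    if i + 1 < len(STAGES):
        return STAGES[i + 1]
    return "done"                     # broadcaster can download the file

# Walk a file through a fully successful run:
stage = STAGES[0]
history = [stage]
while stage != "done":
    stage = advance(stage, passed=True)
    history.append(stage)
```

    The point of the model is simply that each hand-off (broadcaster, editorial team, lab, translators) is an explicit, traceable step on one platform rather than an email thread.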

    The benefits of such a platform are many:
    - A unique platform, to avoid the back and forth of content on external tools
    - Simplified, secure and faster content preparation
    - A reduction of the ecological footprint of these multiple exchanges (digital does not mean no ecological footprint!)
    - A better ROI


    The market continues to evolve and adapt, embracing cloud technologies as an opportunity to ease the way we work, from anywhere, with anyone, whatever the language.
    But today, the answer is far from being just the cloud: multiplying digital tools is definitely not the ultimate answer. The key now lies in the way we work together on these tools, to increase efficiency, productivity and cost-effectiveness with a “green” attitude.

    At Videomenthe, we’ve been working on this collaboration axis for more than six years, by offering a cloud-based collaborative media workflow platform, Eolementhe©.

    Eolementhe© is true to our DNA: fluidity of workflows, ease of use and collaborative work. Our solution allows a real collaboration between the various stakeholders, capitalizing on the possibilities offered by the cloud tools.

  • Subtitles on my podcasts? And why not!

    In February 2021, the French listened to or downloaded 93.6 million podcasts worldwide (source: Mediametrie). Digital audio is a medium that has developed strongly and deserves our full attention!

    So, what is it about podcasts that makes them so popular?

    If listeners are turning to podcasts more and more, it's because they appreciate the easy-to-consume format. It's a medium that accompanies listeners in their daily lives: in the car, on public transport, while exercising, on a walk, while working at a computer, and so on. As a result, podcasts represent a simple and effective way of reaching a large audience: 81% of podcasts are listened to according to Médiamétrie, and 9 out of 10 people continue to listen to a podcast after the first episode (source: Opinion Way).

    In addition to its convenient format, this type of media offers completely different content from what is usually seen on videos or blogs.

    The message conveyed is more human and personal, and fosters a climate of trust between the audience and the creator of the podcast. This notion of proximity makes it possible to offer a real listening experience to listeners.
    Thanks to the voice, they feel the emotions and can imagine their own images, like in a book. The message is very easy to remember: 74% of listeners remember a brand mentioned in a podcast, according to the Midroll 2018 study.

    In order to optimise the means of listening, podcasts are generally available on several platforms: SoundCloud, Deezer, Spotify, YouTube, social networks, etc.

    Social networks are an excellent way to promote a podcast and expand its audience. Communities are often looking for educational, pedagogical and informative content. The audience wants to obtain qualitative information quickly and easily. This is why the podcast has gained its place on social media. It informs interactively on different topics such as world news, customer opinions and feedback, experts' words, news in France, testimonials etc.


    However, it is not easy to share purely audio content when the platforms are mainly visual. Not to mention that the audio format effectively excludes an important audience, the deaf and hard of hearing, who represent 16% of the French population (source: https://www.surdi.info).

    Podcasts must then be transformed into visual content, more specifically into video, the format most favoured by these communities.

    Once your podcast has been turned into a video, it is important to make it accessible to a wider audience. The best solution? Transcribing the audio and adding subtitles! In addition to making the podcast easier to consult, subtitles will improve organic search on web and video platforms, and therefore the visibility of your podcasts thanks to keywords.
    Thanks to the transcript, it is then easy to translate the subtitles into several languages, to further extend the podcast's international reach.


    Our Eolementhe subtitling solution meets 3 needs: transcription in the language of the podcast, translation into 120 languages and subtitle overlay.

    How does it work?

    1/ Upload your video podcast to the eolementhe.com/cc platform
    2/ Choose the language of the podcast and possibly the desired translation languages: the transcription and translations are done automatically
    3/ Correct and validate the generated texts
    4/ Your video podcasts with embedded subtitles are ready, in all selected languages! And you also have access to the subtitle file (.srt format)
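    Behind the scenes, the subtitle file you download follows the standard SubRip (.srt) layout: numbered blocks, each with a start/end timecode line and the subtitle text. Here is a minimal, generic sketch of how transcript segments map to that format (the segment data is invented for illustration):

```python
def srt_timecode(seconds):
    """Format a time in seconds as the SRT timecode HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments):
    """Render (start, end, text) transcript segments as an SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{srt_timecode(start)} --> {srt_timecode(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

# Invented example segments:
segments = [(0.0, 2.5, "Welcome to the show."),
            (2.5, 5.0, "Today we talk about podcasts.")]
print(to_srt(segments))
```

    Because .srt is plain text, the same file works for correction in step 3, for translation, and for upload to video platforms.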

    In short, subtitling your podcasts is a no-brainer: more visibility, better accessibility and access to an international audience!

  • Artificial Intelligence & video: the human element at the heart of things

    I’ve been working in the professional video industry for 20 years now, and this is a sector undergoing profound change. The arrival of IT in the 2000s, followed by the Cloud in 2010, has changed traditional ways of working in the industry, from a human, technical and commercial perspective.

    In fact, companies now need a whole new breed of technical staff to set up the required infrastructures. Those with IT and computer-networking backgrounds predominate, and “traditional” technical staff and video operators may feel neglected by their management. The advent of the Cloud has only intensified this perception, adding a feeling of lacking the skills now in demand. The use of online services causes further debate, and division between the different teams.

    But this is just the beginning. Artificial Intelligence then bursts onto the scene, creating an additional significant change to be handled. The use of Cloud services places vast computational resources at our fingertips, giving us an enormous data-processing capacity, allowing us to create rules and logic, and carry out sorting operations. AI is set to shake up the human/machine relationship once again, and disrupt the balance.
    AI: what are we talking about exactly?

    An algorithm alone is not considered as AI, despite what some companies promote. Artificial Intelligence actually includes different technologies: Machine Learning and Deep Learning, among others.

    According to Dony Ryanto (source: ‘Machine learning, Deep learning, AI, Big Data, Data Science, Data Analytics’ by Dony Ryanto, January 2019), machine learning (ML) is a field of artificial intelligence that uses statistical techniques to give computer systems the ability to “learn” (in other words, to progressively improve their performance on a specific task) from data, without being explicitly programmed in advance.

    Deep Learning (DL) is an autonomous algorithm based on a neural system that can achieve results as good as or even better than human beings. Deep Learning is used in particular for image and voice recognition, automated translation, medical image analysis, social media filters, etc.
    Artificial Intelligence for video

    18 months ago, my team began integrating AI as part of the range of tools offered by Eolementhe – our collaborative-working web platform – enabling media, marketing and HR professionals to process and deliver videos with ease.

    According to Gartner (source: “A Framework for Applying AI in the Enterprise”, June 2017):

    “In general, AI is leveraged into digital businesses in one of six key ways: (1) dealing with complexity, (2) making probabilistic predictions, (3) learning, (4) acting autonomously, (5) appearing to understand and (6) reflecting a well-scoped or well-defined purpose.”

    In the field of video, we can easily identify several areas to which Machine Learning can be applied: prediction, to save time in detecting objects, places and people, as well as transcription.

    Here are a few examples:


    Central archives, media libraries and the multimedia resource centers of training organizations, big corporations and institutions, etc., handle and store a large number of videos, which will be repurposed to create fresh content on a particular topic. Then it’s a matter of indexing this content appropriately using key words and images.

    Some examples of this are: identifying and listing all public figures (politicians, sports personalities, actors, etc.) who appear in a video, or identifying locations (such as town, beach, factory, station, etc.) or objects (cars, bicycles, etc.), making it easier to carry out searches for material to illustrate a specific subject (such as a train drivers’ strike, for example). AI permits an automatic selection of appropriate material to be made (for instance, using facial recognition) to facilitate content repurposing. Forget about expensive, time-consuming, error-prone manual indexing: human staff can now focus on higher-added-value tasks.
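    At its core, this kind of indexing is an inverted index from AI-detected labels to media assets. A toy sketch, with invented clip IDs and labels standing in for the output of a recognition service:

```python
from collections import defaultdict

def build_index(detections):
    """Map each AI-detected label (face, place, object) to the set of
    videos it appears in, so editors can search by keyword."""
    index = defaultdict(set)
    for video_id, labels in detections.items():
        for label in labels:
            index[label].add(video_id)
    return index

# Invented detection results for three archive clips:
detections = {
    "clip_001": {"train", "station", "crowd"},
    "clip_002": {"beach", "bicycle"},
    "clip_003": {"train", "politician"},
}
index = build_index(detections)

# Find material to illustrate a train drivers' strike:
print(sorted(index["train"]))
```

    The labels themselves come from the AI; the value for the archive is the instant keyword search they enable over thousands of hours of footage.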


    Another example: delivery and broadcasting of content on TV channels, web sites, social media, etc., with a preliminary selection made by Artificial Intelligence on video, based on the broadcaster’s predefined criteria. For instance, by picking out locations, faces, words, etc., in accordance with the requirements of special-interest channels or those with a young audience, or restrictions applied in some countries (in relation to nudity, alcohol, etc.).


    Another prospective area for action is getting the Artificial Intelligence to recognize certain terms in a video (terminology specific to a business sector, prohibited/restricted words, brands, etc.) so that this creates a self-learning AI. The goal is to offer a very relevant, efficient transcription service and then generate high-quality multilingual subtitles. Words included in the subtitles can also be used as tags to facilitate media indexing.
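    One simple way to bias a generic transcription engine toward such terminology is to post-correct its output against a client glossary. A rough sketch (the glossary entries and transcript text are invented, and real systems are of course more sophisticated than a find-and-replace):

```python
import re

def apply_glossary(transcript, glossary):
    """Replace commonly mis-transcribed terms with their approved forms.
    `glossary` maps a mis-heard spelling to the correct brand or term."""
    for wrong, right in glossary.items():
        transcript = re.sub(rf"\b{re.escape(wrong)}\b", right, transcript,
                            flags=re.IGNORECASE)
    return transcript

# Invented glossary of frequent mis-hearings:
glossary = {"eol immense": "Eolementhe", "video mint": "Videomenthe"}
raw = "The video mint platform eol immense handles subtitling."
print(apply_glossary(raw, glossary))
```

    The corrected terms can then double as tags for media indexing, as described above.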


    Similarly, metadata extracted by the Artificial Intelligence (such as title, keywords, people featured, transcription, etc.) can improve videos’ organic ranking on search engines and increase their visibility.
    Create a supplementary learning loop to benefit from the best of both worlds

    One of the paradoxes of Artificial Intelligence is that it needs us in order to learn. Forget about the fantasy of the all-powerful, self-aware AI, able to replace human beings in every way. Without the ability to learn, an AI tool will be limited in scope.

    Several multinational corporations are currently working on Artificial Intelligence for video: Google, Microsoft, IBM, but also all the software publishers specializing in a particular area (transcription, etc.).

    In the B2B market, software publishers train their AI tool in-house. There is no process of shared learning for users, in order to limit the risk of errors in the data. The data is key and it’s essential for it to be controlled. (You know the expression “garbage in = garbage out”?)

    On the other hand, some software publishers supply “empty” software. Then it’s up to you to train it according to your own needs. New technologies are furthermore emerging that will enable companies to develop their own learning models, without the (scarce) skills of experts or data scientists. One of these new trends is AutoML, which makes it easy to create machine-learning models.

    At Videomenthe, even prior to incorporating AI, we chose to combine automated Cloud services with the option of manual intervention, to provide our clients with a fast, high-quality result.


    We need to demystify Artificial Intelligence. When used in a relevant, ethical, well-thought-out way, AI can give users more time and eliminate tedious and cumbersome tasks. This is typically the case in the field of video, where human expertise is essential in adding value to content.

    “Ten percent of firms leveraging AI will bring human expertise back into the loop” (Forrester Predictions 2019)

    Future developments will naturally feature AI, because it will no longer be a buzzword, but an accepted fact of life.

    MURIEL LE BELLAC, Videomenthe CEO

  • How to translate & caption your video? (part 2)

    In our first article, “4 reasons why you need to translate & caption your video”, we gave you (we hope) good reasons to subtitle your videos.
    The next step? The implementation… which is less easy!
    Here is the equation to be solved: how do you transcribe your video in its original language, then quickly, efficiently and cost-effectively create multilingual subtitles? And, needless to say, with a professional result…

    Let’s have a look at the kind of professional solutions that can be found on the market.


    Speech to text: the most obvious solution is a human playing back the video and typing out the words they hear.
    This takes a lot of time and effort: a professional transcriber will spend 5 to 8 hours transcribing 1 hour of audio content, depending on the quality of the audio, the speaking rate, the number of speakers, the topic and so on (source: https://www.transcriptionetweb.com/transcription-audio-ce-quil-faut-savoir/).
    Not only that, but with video you also have to synchronize the text with the image, and thus adjust timecodes. Another important point is the subtitle layout, which has to be adapted (one or several lines) to ease reading.
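    As an illustration of that layout constraint, here is a small sketch that wraps a subtitle into at most two lines. The 42-characters-per-line limit is a common broadcast convention, used here purely as an assumption:

```python
import textwrap

MAX_CHARS_PER_LINE = 42   # common broadcast convention (assumption)
MAX_LINES = 2

def layout_subtitle(text):
    """Wrap a subtitle into at most two lines for comfortable reading;
    return None if the text is too long for a single subtitle frame
    and must be split across several timecoded frames."""
    lines = textwrap.wrap(text, width=MAX_CHARS_PER_LINE)
    return lines if len(lines) <= MAX_LINES else None

print(layout_subtitle("Transcription is labour-intensive work for post-production teams."))
```

    When `layout_subtitle` returns None, the transcriber has to split the sentence and adjust the timecodes accordingly, which is exactly where the manual effort goes.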

    Transcription is definitely labour-intensive for post-production teams, journalists…
    Once done, the translation step can start, from the transcription file (that’s why the quality of the speech-to-text is so important since it’s the base for translation!).

    Once again, the human solution is the first that comes to mind: collaborate with one or several professional translators, who will work from the transcription file.
    Timecodes have to be managed again and adjusted for each language, to align the text with the image (English, for example, uses fewer words than French).

    What are the benefits of a human solution? Obviously, accurate, high-quality work, both on speech-to-text and on translation.
    The main cons? Time, since the whole process involves stop-and-start playback of the media files, and budget.


    A simple internet search turns up a wide range of transcription and translation tools, from consumer products to more professional solutions.
    It's tempting to think that software can completely replace human work and spare your team the time-consuming tasks of transcription and translation! But let's face it: miracle solutions that deliver an automated, cost-effective, high-quality result don't exist yet.
    Quality. That's the key word. Even the most efficient and accurate automated solutions can't, for example, contextualize a text.
    They can, however, provide a basis to work on and save your team time. Either way, people need the ability to correct and validate the content.
    All this leads us to a third option, which combines the best of both worlds.


    A few years ago, there was room for an innovative SaaS solution to easily create automated workflows from a web portal, as nothing really efficient was available on the market. So we built it: Eolementhe© was born in 2014, and now includes a mix of automated tools and human actions.
    Eolementhe© is a collaborative media toolbox that allows audiovisual professionals to easily process and deliver Tx-ready files, while facilitating collaborative work thanks to pauses that permit technical and editorial corrections and validation.
    Eolementhe© offers multi-provider tools to transcribe, translate and subtitle a video (70+ languages), through an intuitive interface usable by anyone. And cost-effectively, of course.

  • 4 reasons why you need to translate and caption your videos (part 1)

    Audio is an essential component of video. Of course. Words and music strengthen the informative and emotional impact of video, whatever the final use (corporate communication, news media, advertisement…).

    Yet several statistics show that sound is becoming more and more intrusive for many users (me included; what about you?). One example: 85 percent of Facebook video is watched without sound (source: Digiday). These silent videos rule over Facebook simply because the social network launched its “autoplay” feature in 2017, which automatically starts videos in your timeline… without turning on the audio.

    Browsers, such as Chrome, have also decided to block autoplay videos with audio (it has surely happened to you: you're in a public place, an unexpected video starts playing with the sound on, and you quickly close the app or site. A disastrous effect for the brand or the media outlet!).

    In this context, subtitling videos is now mandatory to speak… in silence. Here is an overview of the main reasons to go on with it!

    1/ Accessibility, for deaf and hard-of-hearing audiences

    Clients, partners, employees… it's obvious: deaf and hard-of-hearing people are the first to benefit from subtitled video content. Without subtitles in your videos, that's a huge audience you won't reach.

    2/ You have no choice, it’s mandatory for your video marketing strategy…

    In 2019, a video for social networks has to be subtitled.
    Video is now watched everywhere, on every device; that's no secret. Users, professional or not, want to watch video content even when they're not in a position to listen to it (public transport, open-plan office, waiting room, conference…). That's why captions are essential to let every user comfortably watch your content on a mobile, laptop or tablet.
    Netflix, for instance, understands this well and uses it to catch the audience's attention: thanks to a clever mix of captions and striking visuals, the brand creates impactful trailers without voice or music, which are spread across social networks.
    Brands and media outlets are starting to rethink their creative processes to deal with that reality. What about you?

    3/ It’s a first step to internationalize your content

    Once your video is subtitled in the original language, translation into several languages becomes much easier. The subtitle file can then be translated by automated software, by a translator, or by a mix of both (we'll cover that topic in the second part of this article, to be published soon).
    Whatever the final goal, external or internal communication, translation lets you reach an international audience (yes, the world is yours!).
    Does your company have affiliates in several countries? Then think about all those internal events (seminars, training sessions…) worth sharing with your teams in their native language, wherever they work.

    4/ Captions improve video SEO

    You might be surprised, but subtitles can have a positive effect on the organic search ranking of your videos. Here is why:
    • subtitles let you reach a wider audience and thereby increase views, a criterion used by Google and co. to rank your website,
    • a website that includes video is 53 times more likely to appear on the first page of Google results (you can trust the Forrester study!),
    • by adding your subtitle file to your video platform (YouTube, for example), you will increase the impact of the video's keywords.

    To cut a long story short, subtitling your video is worth it!

    OK, but in practice, how do you do it? The second part of this article will be dedicated to solutions for adding great professional subtitles to your videos, quickly and effectively.
    Stay tuned…
    Written by Sandrine Hamon

  • Media professionals, how do you validate broadcast-ready content?

    Videomenthe draws on its background as a distributor of broadcast solutions and services to design its own media workflow solutions dedicated to the audiovisual industry.
    The company keeps adding features to Eolementhe, its key platform dedicated to the creation of workflows and the processing, validation, sharing and delivery of media files.
    The newest release includes a ‘Media Library’ function and retains Eolementhe's extremely easy-to-use, friendly interface, allowing every user to prepare and deliver video files.
    Eolementhe's ‘Media Library’ offers the basic functions needed to prepare a video: selecting a thumbnail, filling in the editorial metadata, automatically generating the technical metadata (with the option to download a report file), and inserting a trim with TC in / TC out to generate a video clip.
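    For readers curious about the TC in / TC out mechanics, a timecode of the form HH:MM:SS:FF converts to an absolute frame count as follows. This is a generic sketch assuming 25 fps PAL, not Eolementhe's internal code:

```python
def tc_to_frames(tc, fps=25):
    """Convert a HH:MM:SS:FF timecode to an absolute frame count
    (25 fps PAL assumed; other rates via the fps parameter)."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def clip_length(tc_in, tc_out, fps=25):
    """Number of frames between TC in and TC out."""
    return tc_to_frames(tc_out, fps) - tc_to_frames(tc_in, fps)

# A 10-second trim at 25 fps:
print(clip_length("00:01:00:00", "00:01:10:00"))
```

    Frame-accurate arithmetic like this is what lets a trim produce exactly the clip the editor marked, rather than a near miss rounded to the nearest second.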

    “Content is king”… Bill Gates said it in 1996, and this statement still echoes in so many sectors!
    In the media industry, the growing number of files and formats to process is redefining our working methods; that's a fact. Creating and delivering high-quality media files is a brain-teaser for both content providers and broadcasters. On top of that, the proliferation of mandatory technical specifications for file delivery does not make file-processing operations any smoother.
    Validation processes are thus time-consuming and pricey, and file-exchange methods are often unsecured. Preparing media files is not simple… And the key to success, as we all know, is to deliver a large volume of high-quality content. So, to borrow a trendy word, the validation step has to be ‘agile’, with new tools streamlining collaboration and making this crucial step prior to video playout faster and smoother.
    So, wouldn't it be great to have a common workflow solution allowing both content makers and distribution platforms to validate compliant media files easily?

    For instance, when journalists are in the field, file-processing operations are difficult to fit into a routine, because of time constraints, the equipment at their disposal, and so on.
    One solution consists of ingesting the video on the ground and filling in the elementary editorial metadata, in order to resume the file processing later. With Eolementhe, our solution for processing media files, and thanks to its pause option, journalists can pick up where they left off once back at the office.
    After connecting to Eolementhe, they click on the Media Library tab. This feature lets them stream the video, extract a screenshot, add comments per timecode, complete the editorial metadata, generate both technical and editorial reports, and create video clips. Once the editorial part is finished, users launch the technical workflow, for example with transcoding, quality control, subtitle insertion and delivery. After only a few clicks, the video is prepared and ready to be distributed on several playout platforms.

    Another use case is the validation process between partners (internal and external): take the example of a video delivery from a production or post-production house to a TV channel. Before a broadcast-ready file is obtained, the video has to be approved both editorially and technically. Consequently, files go back and forth several times before the video is stamped ‘broadcast-ready’.

    Eolementhe makes this communication between internal and/or external partners easier and faster. Indeed, it enables both editorial validation, with the Media Library feature, and technical validation of files, with the workflow editor. Partners exchange their comments and files via Eolementhe to produce the ‘broadcast-ready’ files. In addition, several standards are available via pre-filled metadata forms, allowing non-technical users to create technically compliant files as well.

    Thanks to Eolementhe's single web-based interface, reachable via a browser, users create compliant files including technical and editorial metadata, and process, share, deliver and receive media files. Eolementhe thus considerably reduces time to air by providing a platform focused on cost and time savings.
    So, media professionals, are you seeking an easy-to-use solution to prepare and validate your media files? We think you've found it with Eolementhe… There is only one thing left to do: watch the demo!

  • Disruption? Certainly not! Continuity and flexibility are what we’re all about!

    Videomenthe's CEO, Muriel Le Bellac, gives her point of view on "disruption".

    We live in a world in constant flux, and that’s a fact. Technological progress in the fields of both IT and telecommunications is changing traditional working practices significantly.

    Our world of professional broadcasting must meet the challenge of digitalisation of content and it is up to us to reassess methods of working in order to take advantage of these technological developments.

    Should we be alarmed by all this? It means switching over from tape to file, from video converter to transcoder, from oscilloscope to probe software, from terrestrial broadcasting to internet streaming, from linear consumption of content to catch-up or on-demand, and from the TV screen to laptop, tablet or mobile screens, and so on.
    The content itself, however, must always go through the same stages of capture, editing and compliance prior to transmission live, by catch-up or on-demand.

    * So why talk about disruption?
    Is it not just a fantastic process of evolution that gives us the kind of freedom of audiovisual content consumption that was unimaginable just 20 years or so ago? After all, VOD has only been around since the beginning of the new millennium…
    Setting out from this starting point, as time has gone by it has been necessary to replace first the dedicated hardware platforms processing analogue sources, then digital baseband sources, with servers employing software solutions that handle live internet streaming and media files.
    Then came the tremendous resources provided by the arrival of the Cloud and high-speed internet…
    This made our developers very happy, but caused anxiety for our technical teams, fearing that they would be stripped of their own local infrastructure.
    One of the approaches is to view the Cloud, in the first instance, as simply an extension of the on-site facilities, making it possible to handle work overflow and peaks in activity, without over-expanding local infrastructures, which would then be operating well below capacity for most of the year.

    * And what about continuity?
    The idea then came about of using a dedicated portal for our work, employing the same tools as those used internally: the Cloud now starts to display signs of continuity and flexibility.

    Similarly, if we extend this reasoning to its logical conclusion and use this same workflow creation portal to run our own compute farms in-house, then from the point of view of users, only the interface and the resources as a whole matter. They are not concerned about how the load is shared between their infrastructure and the one made available to them in the Cloud. Continuity and flexibility are our motto and form the foundations of Eolementhe©.

    We are living in an exciting time which is allowing us to push back the limits in terms of processing and making our content available via multiple devices. We should take advantage of it to exploit every solution, every tried-and-tested revolutionary method, in a progressive and flexible way, to gain the greatest benefit from it.

    The On-Premises/Cloud hybrid approach without a doubt forms the basis for the new broadcasting practices of the future.
    The world of the media file allows us to employ far greater technical resources than ever before and to catch a glimpse of future links between our “On-Prem” ecosystem and the Cloud, links which could become quite seamless.

    “Adapt or Die”, the motto governing my work at the beginning of my professional career, which focussed on the transition from SD to HD video in the ‘90s, is once again highly applicable today. The telecom and Cloud networks offer us the prospect of mass broadcasting via multiple platforms, and they represent an incredible toolkit for developing our creativity and designing new packages in our field, which has so much to offer the media.

    So, let’s stop talking about disruptiveness and forge our way towards flexible interaction and a wealth of content!

  • Episode 2! Project Management 3.0

    Next article! You haven't read the previous episode? It is the one just below!

    Managing a project with GitHub
    The GitHub interface is easy to use and provides all of the tools required to manage a project. From resource management to sprint planning to code management, there is no need for any additional tools.

    Team management made simple
    It’s very easy to create teams, add developers and assign them to one or more projects.
    Issues, for tracking everything
    An issue is like a ticket: it may cover a feature, a bug, an improvement, a question, etc.

    GitHub issues
    • An issue may have tags such as: feature, bug, improvement, design, question, etc.
    • An issue may be assigned to one or several members of the team.
    • An issue has a description and everyone can discuss it, add comments, ask questions and so on.

    Milestones for each sprint
    GitHub milestones
    Stages called milestones can be created to represent the sprints. Each sprint has a start and end date and contains a list of issues to be dealt with. The percentage of work completed is updated automatically as issues are closed.

    A Kanban board, for an overview
    Kanban board
    This helps organise the sprint: you can see at a glance which tasks are in progress, which have been completed, and who is working on what.

    Readme and Wiki, to document everything
    The readme of a project sets out all of the important information needed before starting. It tells you where to begin and gives quite a precise idea of how the whole thing works. The wiki lets you go further into the details.

    Graphics, for easy analysis
    These enable you to have an overview of the work on each project, including, amongst other things:
    • Contributions, by developer
    • Code frequency, by day/week/month
    • Table of commits, by branch

    Pull requests, to keep code separate
    Using pull requests, developers each work on their own version of the code, so there is no risk of breaking the master branch, which contains the production code. After thoroughly testing their code, a developer sends a request to the person managing the master branch to add their modifications. The code is then reviewed automatically, and then manually, before being merged or rejected.
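The branch-based flow described above can be sketched with plain git commands. This is a minimal local sketch: the directory, file and branch names are illustrative, and on GitHub itself the final merge would go through a pull request with review, rather than a local `git merge`.

```shell
# Start from a clean throwaway repository for the demonstration
rm -rf /tmp/pr-demo && mkdir /tmp/pr-demo && cd /tmp/pr-demo
git init -q -b master
git config user.email "dev@example.com"   # illustrative identity
git config user.name "Dev Example"

# master holds the production code
echo "print('hello')" > app.py
git add app.py
git commit -q -m "Initial production code"

# Each developer works on a branch of their own, leaving master untouched
git checkout -q -b feature/greeting
echo "print('bonjour')" >> app.py
git add app.py
git commit -q -m "Add French greeting"

# After review and testing, the maintainer merges the branch into master
git checkout -q master
git merge -q feature/greeting
git log --oneline
```

Until the merge, the master branch remains exactly as it was, which is the whole point: untested work never touches the production code.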

    Searching the project
    The search field allows a term to be searched throughout the project. It can be filtered by issues, tags, commits, code, wiki etc.
    Using a single tool for all aspects of a project is a real advantage. GitHub offers a limited number of functions, but it includes the essentials, and everything works perfectly.
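For example, GitHub's search field accepts qualifiers that narrow the results by type, label or author; the label and user names below are made up:

```
is:issue is:open label:bug    # open issues tagged "bug"
is:pr author:alice            # pull requests opened by alice
in:title transcoder           # items with "transcoder" in the title
```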
    Conclusion? As simple as:
    • Use GitHub to manage your projects.
    • Use Scrum sprints!

  • Project management 3.0 by Rémi!


    1. The traditional method
    This involves analysing the requirements, defining a product on the basis of a comprehensive set of specifications, designing the product, developing it and testing it. A project may thus take anywhere from several months to several years. The project manager has to set the key milestones for each stage. The study and analysis stages remain completely theoretical until the developers encounter problems that no one could have foreseen. The development stage is therefore particularly unpredictable in length and often exceeds the duration and/or budget originally allowed for it.

    2. Agile methods

    These methods began to be developed in the United States in 2001, as it became clear that a growing number of software projects were failing. A team of experts therefore met and drew up a strategy setting out the approach to adopt in order to adapt most readily to change and thus respond best to clients’ expectations.
    The principle on which these methods are based is to divide the project up into several iterations, which cover the functions that come together to make up the final product. There are several advantages to working on a number of mini projects rather than one large project:
    • Developers no longer feel that they are getting nowhere
    • The clients are regularly involved in the test stage of each function and give their input as required.
    • A “virtuous circle” is established.

    The Scrum approach
    Scrum is the most popular of the Agile methods. All of the functions of the application are defined in a backlog. The development team then meets every two to four weeks to decide which functions it will implement over the coming iteration; these iterations are known as “sprints”. The cycle is repeated until the backlog has been completed. Once a sprint is over, there is a debrief and work begins again on a new set of functions.
    I would find this method worthwhile if it weren't for all of the meetings it involves. Here is a list of the occasions when the team has to meet for discussion:

    • Sprint planning: This is when the priorities are established for the upcoming sprint (allow half a day per sprint)

    • Daily scrum: This involves a meeting lasting several minutes every morning so that everyone involved can talk about what they did the day before, any difficulties they experienced and what they are going to do today. In theory, it shouldn’t last more than 15 minutes, but all it takes is for one person in the team to have a habit of talking for a bit too long and your morning is already almost over.

    • Backlog refinement: This involves estimating how long each task will take, expressed in points. In addition to being completely arbitrary and incomprehensible (1 point = how many hours?), the time spent estimating a function is often long enough to have actually developed it (allow half a day per sprint).

    • Sprint review: This is the time set aside for reviewing the sprint that has just been completed. The usual conclusion: too many tasks were scheduled, and the following sprint will be perfect! (In fact, the following sprint goes just as poorly as the previous one; allow another half-day per sprint.)

    All of these meetings are very time-consuming. On average, a team of 4 developers loses “10 man-days” in meetings every two weeks.
    The expression “agile” has become meaningless. In theory, the team ought to be able to adapt immediately to change. Unfortunately, I have attended meetings at which the project manager could not add a small task to the current sprint because this went “against Scrum methodology”.
    To conclude, the rapid sprints and iterations proposed by the Scrum method represent genuine progress in the management of projects. On the other hand, the excessive number of meetings and the lack of adaptability diminish the benefits of this method.

    In the next episode, I will talk about GitHub!