Tools

Perception testing stations

The Speech and Language Laboratory has a long history of research into auditory perception and has recognised expertise in the development of numerous test management tools.

The PERCEVAL (PERCeption EVALuation Auditive & Visuelle) station is an automated auditory and visual perception testing station developed at the LPL. It provides a complete environment for preparing, configuring and running an experiment and collecting its data. PERCEVAL allows experiments to be designed in which the subject is exposed to a series of visual and/or audio stimuli. It is therefore particularly well suited to the study of speech and language perception.
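PERCEVAL's actual scripting and hardware layers are far richer, but the core of such a station is a trial loop: present each stimulus in randomised order, collect the subject's response, and log a reaction time. The sketch below is a toy illustration of that pattern; all names and callbacks are hypothetical, not PERCEVAL's API.

```python
import random
import time

def run_block(stimuli, present, collect_response):
    """Run one block: present each stimulus in random order and
    record the subject's response and reaction time."""
    results = []
    for stim in random.sample(stimuli, len(stimuli)):  # randomised order
        t0 = time.perf_counter()
        present(stim)                 # e.g. play audio / show an image
        answer = collect_response()   # e.g. wait for a key press
        results.append({"stimulus": stim,
                        "response": answer,
                        "rt": time.perf_counter() - t0})
    return results

# Toy run with stub presentation/response callbacks:
log = run_block(["ba.wav", "pa.wav", "da.wav"],
                present=lambda s: None,
                collect_response=lambda: "b")
print(len(log))  # → 3
```

In a real station the `present` and `collect_response` callbacks would be bound to calibrated audio playback and response-box hardware; the randomisation and timestamped logging are what the loop itself contributes.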

Alain Ghio and Carine André, ‘PERCEVAL-LANCELOT Technical Data Sheet’, TIPA. Interdisciplinary Work on Speech and Language [Online], 38 | 2022, online since 27 January 2023, accessed 4 November 2025. URL: http://journals.openedition.org/tipa/5705; DOI: https://doi.org/10.4000/tipa.5705

The ‘LiveIntell’ software was developed as part of the DAPADAF project and allows for ‘live’ intelligibility measurements. It is designed to display text on a screen that is only visible to a speaker (e.g. a patient) who must pronounce the statement. A listener who cannot see the statement must transcribe it based on what they have understood aurally. The speaker's recording is saved and pre-segmented into sequences. Designed for measuring intelligibility in real time, it can also be used as a corpus recording module with pre-segmentation of sequences, which facilitates audio data processing.
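An intelligibility measurement of this kind ultimately compares the listener's transcription against the target text. As a rough illustration only — this is a simple token-matching score of our own devising, not LiveIntell's actual metric — the proportion of target words recovered could be computed like this:

```python
def intelligibility(target, transcription):
    """Proportion of target words recovered by the listener
    (order-insensitive; a deliberately simple token-matching score)."""
    wanted = target.lower().split()
    heard = transcription.lower().split()
    correct = 0
    for word in wanted:
        if word in heard:
            heard.remove(word)   # each heard word can match only once
            correct += 1
    return correct / len(wanted) if wanted else 0.0

score = intelligibility("le chat dort sur le lit",
                        "le chat sort sur le lit")
print(score)  # ≈ 0.83 (5 of 6 target words recovered)
```

Production tools typically use stricter alignment-based measures (e.g. word error rate via edit distance), which additionally penalise insertions and order changes.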

Physiological investigation stations for speech production

The EVA Assisted Speech Evaluation system is a device for recording and analysing speech production mechanisms. It implements multi-parameter speech analysis: speech production is observed not only in its acoustic dimension but also through complementary sensors providing synchronous signals that are not redundant with the speech signal. These channels may be aerodynamic (oral and nasal airflow, intraoral and subglottal pressure) or electrophysiological, such as electroglottography, which tracks the closure of the vocal folds by measuring electrical conduction.

This station is primarily used in phonetics to study speech production mechanisms, both in the laboratory and in the field (Amazonia, Tanzania, etc.). However, its use has also become widespread in the diagnosis of vocal pathologies. It is based on the study of most of the main speech production parameters: the acoustic signal, pitch, voice intensity, and so on.

Equipped with numerous sensors that enable these measurements, it helps practitioners refine their diagnoses and monitor the effects of surgical procedures, pharmaceutical treatments and rehabilitation. Current work focuses on complex clinical situations, particularly Parkinson's disease and vocal strain, which are ‘difficult’ test cases for measurement methods and thus help to refine and validate the procedures.
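Two of the parameters mentioned above, voice intensity and pitch, can be illustrated with textbook signal-processing estimators. The sketch below computes RMS intensity in dB and a crude autocorrelation-based F0 on a synthetic tone; it is a didactic toy, not EVA's signal-processing chain.

```python
import math

SR = 8000  # sampling rate in Hz (illustrative value)

def rms_db(frame):
    """RMS intensity of a frame, in dB relative to full scale (1.0)."""
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def f0_autocorr(frame, fmin=60, fmax=400):
    """Crude F0 estimate: the lag maximising the autocorrelation
    within the plausible pitch-period range [1/fmax, 1/fmin]."""
    lo, hi = SR // fmax, SR // fmin
    best_lag, best = lo, float("-inf")
    for lag in range(lo, hi + 1):
        r = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if r > best:
            best, best_lag = r, lag
    return SR / best_lag

# 50 ms frame of a 0.5-amplitude 120 Hz tone (stand-in for a voiced vowel)
frame = [0.5 * math.sin(2 * math.pi * 120 * n / SR) for n in range(400)]
print(round(rms_db(frame)))        # → -9 (0.5-amplitude sine ≈ -9 dBFS)
print(round(f0_autocorr(frame)))   # close to 120
```

Real analysis systems refine this considerably (windowing, interpolation around the autocorrelation peak, voicing decisions), but the two quantities being measured are the same.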

Version 3 of the device is currently being developed, notably in collaboration with GIPSA-Lab in Grenoble.

Alain Ghio, Bernard Teston, Antoine Giovanni, François Viallet, Yohann Meynadier and Didier Demolin, ‘EVA Technical Data Sheet’, TIPA. Interdisciplinary Work on Speech and Language [Online], 38 | 2022, online since 27 January 2023, accessed 4 November 2025. URL: http://journals.openedition.org/tipa/5675; DOI: https://doi.org/10.4000/tipa.5675

Editing, speech signal processing, phonetic annotation

The observation, editing, processing and annotation of signals have always been technical concerns of prime importance for a laboratory that must handle large volumes of acoustic and physiological signals.

PHONEDIT is a signal editor developed for voice and speech research. It is used to visualise, segment, mark, measure and process acoustic, aerodynamic, palatographic and kinesiographic parameters. PHONEDIT can read most file formats produced by the various computer systems used by phoneticians. It is available for free download from the laboratory server.

Alain Ghio, Robert Espesser and Tatsuya Watanabe, ‘PHONEDIT Technical Data Sheet’, TIPA. Interdisciplinary Work on Speech and Language [Online], 38 | 2022, online since 27 January 2023, accessed 4 November 2025. URL: http://journals.openedition.org/tipa/5713; DOI: https://doi.org/10.4000/tipa.5713

Annotations

SPPAS is free scientific software dedicated to the annotation and automatic analysis of speech. Developed since 2011 by Brigitte Bigi (CRHC CNRS), it provides a comprehensive and customisable solution for processing audio or video recordings. Its aim is to facilitate and accelerate work in linguistics, phonetics and language sciences by automating time-consuming and complex tasks such as the segmentation of speech into phonemes, words and syllables.

Free and open source, SPPAS guarantees transparency and reproducibility: users can view, modify and redistribute its code and linguistic resources. Versatile and multilingual (French, English, Italian, Chinese, etc.), it also allows annotations to be converted and analysed in a wide range of formats (TextGrid, Elan, HTK, Sclite, etc.). SPPAS offers a user-friendly graphical interface, command line tools and a Python API for advanced users.
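SPPAS ships its own readers and writers for these annotation formats. Purely to illustrate the kind of structure such tools exchange, here is a minimal hand-rolled serialiser for a single-tier Praat TextGrid (the function name and layout are our own; this is not SPPAS code):

```python
def make_textgrid(intervals, name="words"):
    """Serialise (start, end, label) tuples into a minimal Praat
    long-format TextGrid containing a single interval tier."""
    xmin, xmax = intervals[0][0], intervals[-1][1]
    lines = [
        'File type = "ooTextFile"',
        'Object class = "TextGrid"',
        "",
        f"xmin = {xmin}",
        f"xmax = {xmax}",
        "tiers? <exists>",
        "size = 1",
        "item []:",
        "    item [1]:",
        '        class = "IntervalTier"',
        f'        name = "{name}"',
        f"        xmin = {xmin}",
        f"        xmax = {xmax}",
        f"        intervals: size = {len(intervals)}",
    ]
    for i, (start, end, label) in enumerate(intervals, 1):
        lines += [
            f"        intervals [{i}]:",
            f"            xmin = {start}",
            f"            xmax = {end}",
            f'            text = "{label}"',
        ]
    return "\n".join(lines) + "\n"

tg = make_textgrid([(0.0, 0.48, "bonjour"), (0.48, 1.1, "tout le monde")])
print(tg)
```

Each tier is a time-aligned sequence of labelled intervals; automatic annotation amounts to filling such tiers (phonemes, syllables, words) from the audio, which is exactly the time-consuming work SPPAS automates.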
Awarded the 2022 Open Science Prize for Free Research Software (special mention from the jury), SPPAS is a benchmark tool for open research, supported by the LPL.

New developments are under way, including a web application hub and an automatic annotation system for French Cued Speech (LfPC).

Brigitte Bigi, ‘SPPAS Technical Data Sheet’, TIPA. Interdisciplinary Work on Speech and Language [Online], 38 | 2022, online since 27 January 2023, accessed 4 November 2025. URL: http://journals.openedition.org/tipa/5745; DOI: https://doi.org/10.4000/tipa.5745

Communication assistance

The PCA (Alternative Communication Platform) is software that assists with verbal and non-verbal communication. It offers several solutions adapted to the user's communication abilities and motor skills:

  • Verbal communication in writing
  • Non-verbal communication using icons
  • Accessibility via keyboard, mouse or scanning
  • Motion sensors to control the interface

PCA is used by many individuals, medical and social professionals, and care facilities. It was designed and developed at the Speech and Language Laboratory (CNRS-AMU). Distributed by Aegys from 2003 to 2011, the software is now available free of charge.
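Scanning access of the kind listed above can be pictured with a toy simulation: a highlight steps through the rows of an on-screen keyboard, a switch press selects a row, the highlight then steps through that row's cells, and a second press selects the item. The sketch below models only that selection logic; it is a generic illustration of the technique, not PCA's implementation.

```python
def row_column_scan(grid, press_steps):
    """Simulate row-column scanning: the highlight advances through
    rows until a first switch press, then through that row's cells
    until a second press. `press_steps` gives the number of highlight
    advances before each press (wrapping around the grid)."""
    row_steps, cell_steps = press_steps
    row = grid[row_steps % len(grid)]
    return row[cell_steps % len(row)]

keyboard = [["a", "b", "c"],
            ["d", "e", "f"],
            ["g", "h", "i"]]
print(row_column_scan(keyboard, (1, 2)))  # → "f"
```

With a single switch, any of the nine items is reachable in two presses; this is why scanning interfaces suit users whose motor control is limited to one reliable gesture.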