Envion is an ecosystem for Pure Data (PlugData) and Max/MSP (Cycling ’74), designed for algorithmic and procedural composition, musique concrète, and experimental sound processing.
💡 Feedback is welcome!
Please feel free to test, comment, or open issues on GitHub — your input helps improve Envion.
🔖 Note on Licensing
Envion is released under the MIT License with Attribution. You are free to use, modify, and redistribute this project, including for commercial purposes, as long as you clearly attribute the original project name Envion and the author Emiliano Pennisi. See the LICENSE file for details.
Envion is an envelope‑first engine for Pure Data (Pd): it drives the read index of stereo buffers through textual sequences of triplets (value, time, delay) sent to vline~.
Each line of a text file represents a complete envelope; switching lines means switching gestures.
The core philosophy behind Envion lies in its invitation to slow down — to explore sound through micro fine-tuning and patient listening. Rather than chasing immediacy, the instrument rewards those who take time to sculpt and observe its evolving behaviors. Approaching Envion too quickly or superficially will rarely yield meaningful results, as its depth unfolds only through careful attention and sonic restraint.
Envion — Official Video Playlist
Playlist
Explore Envion through focused demos, process breakdowns, and performance clips.
Each video dives into envelope-first articulation, NET-AUDIO random sourcing,
and real-time strategies for generative sound design.
Best experienced on headphones or a good stereo setup.
Engage: Start from the latest uploads or browse the full series to follow
the evolution of presets, stretch logic, Dynatext envelopes, and live performance workflow.
Before you start — Envion is not a single .pd file
Envion will not work if you only download the Envion_v5.1_Plugdata.pd file.
The patch depends on its full folder structure (libs, core, utils, netsound, data etc.).
You must download the entire repository.
When the patch is outside its directory tree, PlugData cannot resolve its abstractions and
support files — resulting in silent loading or missing modules. Keeping the folder intact
ensures everything loads correctly.
Once the dependencies are installed, you can simply turn on the DSP and load the first preset, as shown in the image.
Envion may look complex at first glance, but it’s actually very easy to start making sound.
You don’t need to understand every module right away — just activate the DSP and try one of the included presets.
Start with the main master preset, conveniently located to the right of the DSP activation (the large bang button):
Local presets on the right
Network-based presets on the left (fetching sounds directly from the web)
Ability to load your own samples into any preset
Take your time to explore the deeper functions later — for now, focus on playing, listening, and discovering how Envion responds. Exploration is part of the philosophy, but sound comes instantly once you start.
Envion — Quick Start overview
Xenakis — Stochastic Sound Masses ↔ Envion
Hand-drawn diagram by Iannis Xenakis for Pithoprakta:
horizontal axis = time, vertical axis = velocity/pitch, each line is a glissando-trajectory inside the sound mass.
This sketch shows how Xenakis shaped a sonic mass using physical parameters
(temperature/pressure → probability of movement). He is not “writing notes”, but
drawing trajectories: dense overlapping curves whose collective behaviour
becomes the musical form.
Parallel with Envion (algorithmic music):
Trajectory (Xenakis) → Dynatext in Envion: amp–dur–offset triplets sent to vline~ as gesture envelopes.
Statistical mass → Random List / Random Terna: controlled variability producing emergence instead of repetition.
Physical model → Procedural model: Envion separates sound material (sample/web) from articulation (the gesture), i.e. forces applied to matter.
In other words: just like Xenakis, Envion does not compose “notes”
but behaviour. Sound becomes the result of evolving dynamics,
not a fixed sequence — a living form shaped in real time.
Musical Gesture Theory — why this matters to Envion
The seminal text Gesture–Music by Claude Cadoz and Marcelo M. Wanderley
partly inspired the envelope-first design of Envion.
Their view of instrumental gestures as an interplay of action/energy, perception, and meaning
aligns with Envion’s approach: envelopes, slicing, and mappings behave as digital gestures
written onto audio rather than merely playing files.
Ergotic (action/energy): envelopes and triggers impart force to the material.
Epistemic (perception): trajectories shape how motion and form are perceived.
Semiotic (meaning): mappings and presets articulate musical intent.
In short: Envion writes trajectories on sound. This envelope-driven, gesture-centric view
helps explain why a single fragment can yield thousands of distinct, evolving articulations.
Reference — Cadoz, C. & Wanderley, M. M., “Gesture – Music”, in Trends in Gestural Control of Music (IRCAM – Centre Pompidou, 2000).
Notes on this documentation
Envion is in continuous development.
Some aspects of this documentation may change over time.
NET-AUDIO is Envion’s web inlet for sound: a curated stream of random audio atoms fetched from the internet and articulated by your Dynatext ternary envelopes. Source is unpredictable; gesture is yours.
Why it matters
Procedural & non-deterministic: every list is different; performance stays alive.
Freesound-powered: a lightweight wrapper returns direct preview URLs from Freesound.
Artist’s philosophy: separate material (the web) from articulation (Dynatext) to focus on form, gesture, and meaning.
Learn how to generate fresh lists, load them in Envion, and map the first 8 entries to the module’s slots.
NET-AUDIO inside Envion — random sources, Dynatext envelopes.
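To make the idea concrete, here is a minimal Python sketch of the kind of wrapper described above, assuming the public Freesound APIv2 text-search endpoint and a personal API token; the field names, shuffling and slot count of 8 are illustrative — Envion’s actual wrapper may format things differently.

```python
# Hypothetical sketch: fetch direct preview URLs from Freesound (APIv2 text search).
# Requires the `requests` package and a registered Freesound API token.
import random
import requests

FREESOUND_TOKEN = "YOUR_API_TOKEN"  # assumption: you have your own API key

def random_preview_urls(query: str, count: int = 8) -> list[str]:
    """Return up to `count` direct preview URLs for a search term."""
    resp = requests.get(
        "https://freesound.org/apiv2/search/text/",
        params={
            "query": query,
            "fields": "name,previews",
            "page_size": 50,
            "token": FREESOUND_TOKEN,
        },
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    random.shuffle(results)  # non-deterministic: a different list on every call
    return [r["previews"]["preview-hq-mp3"] for r in results[:count] if "previews" in r]

if __name__ == "__main__":
    for url in random_preview_urls("wood"):
        print(url)  # paste or map these into the module's 8 slots
```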
Found Net Sound — Main Concept
Envion is inspired by the principle of Found Net Sound: a contemporary
rereading of Pierre Schaeffer’s objet sonore translated into the era of the network.
Here, sound material is not “sampled” as a closed archive, but intercepted, traversed,
and re-circulated.
The approach is cybernetic and gestural: the system does not treat audio as a static
object but as something that appears through relation — captured in the moment it
passes through the mesh of conditions that make it audible.
What normally stays invisible — buffers, temporary streams, cache fragments,
corrupted previews, residues of circulation — becomes living material, articulated
not by selection but by encounter.
In this view, the sound-object is no longer “chosen” by the composer:
it meets the listener. Emergence replaces collection; gesture replaces inventory.
This is why NET-AUDIO is not a browser of samples but a channel of emergence:
the network itself becomes the site where the sound-object is found — the first imprint,
before any act of composition.
EnvionSeeder — web console for random fetching, slicing and preprocessing of found sound material.
EnvionSeeder represents the networked counterpart of the Envion ecosystem —
a living infrastructure that unites fetching, transformation, and ontological depersonalization
of sound within one integrated flow.
It connects Pure Data’s gesture-based environment with an online backend for
procedural harvesting, normalization, and erasure of source identity.
What it does
Fetches random or query-based sounds from curated APIs (currently Freesound)
Performs automatic slicing and normalization to prepare material for Envion
Provides a live web console that displays query terms, fetched filenames, and real-time process logs
Builds an ontological archive of depersonalized sound fragments
The web interface of EnvionSeeder includes a live console that mirrors the Python process:
users can either insert a search term or trigger a random query.
Each action prints the entire process — from the API request to the transformation of the sound fragment —
revealing the inner metabolism of the system.
Ontologically, EnvionSeeder extends the concept of sampling without awareness —
a practice in which the origin of the sound is intentionally obscured.
The system fetches audio atoms scattered across the network and rewrites them through
automated processes that dissolve authorship, semantics, and identity.
The console operation parallels the local Python scripts used by EnvionFoundry.
Every query is executed automatically, generating a list of raw URLs that point to
fragments of found sound distributed over the network.
Once fetched, a secondary process based on ffmpeg extracts random
slices and subjects them to several pre-treatments:
Cutting the source into short random slices of a few seconds
Assigning non-readable filenames (random alphanumeric codes) to erase semantic traces
Applying extended fade-outs and short fade-ins to dissolve temporal edges
Pitch-down transformation: asetrate=16000, aresample=48000, atempo=0.7 — slow, deep drone-like textures
Together these actions constitute a procedural act of erasure and reinvention:
sound is detached from any identifiable origin and reintroduced into the Envion ecosystem
as pure gesture, pure envelope, pure potential.
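As a rough illustration of the pre-treatments listed above, here is a hedged Python sketch of one slicing/erasure pass, assuming ffmpeg is installed and on the PATH; the in-point range, slice lengths, fade times and output naming are invented for the example — only the asetrate=16000, aresample=48000, atempo=0.7 chain comes from the description, and EnvionSeeder’s actual scripts may differ.

```python
# Hypothetical sketch of the slicing/erasure pass described above (requires ffmpeg).
import random
import string
import subprocess
from pathlib import Path

def anonymous_name() -> str:
    """Non-readable filename: a random alphanumeric code, erasing semantic traces."""
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=12)) + ".wav"

def erase_and_slice(src: Path, out_dir: Path, n_slices: int = 4) -> None:
    """Cut random short slices, apply a short fade-in / long fade-out, then pitch down."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for _ in range(n_slices):
        start = random.uniform(0.0, 20.0)   # random in-point (assumes the source is long enough)
        dur = random.uniform(2.0, 5.0)      # short slice of a few seconds
        fade_out = dur * 0.5                # extended fade-out vs. a very short fade-in
        filters = (
            f"afade=t=in:st=0:d=0.05,"
            f"afade=t=out:st={dur - fade_out:.2f}:d={fade_out:.2f},"
            "asetrate=16000,aresample=48000,atempo=0.7"   # pitch-down / drone-like slowdown
        )
        subprocess.run(
            ["ffmpeg", "-y", "-ss", f"{start:.2f}", "-t", f"{dur:.2f}",
             "-i", str(src), "-af", filters, str(out_dir / anonymous_name())],
            check=True,
        )

# erase_and_slice(Path("found_source.mp3"), Path("slices"))
```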
“Sampling without awareness” thus becomes not a technical shortcut but an aesthetic and ethical stance —
a refusal of the recognition economy that governs contemporary digital culture.
The resulting fragments exist as anonymous carriers of motion inside Envion’s network of gestures.
For now, EnvionSeeder is freely accessible — users can experiment with the console,
launch random queries, and experience the ontology of procedural sound fetching.
This web-based access layer is meant as an open gateway into Envion’s world.
EnvionFoundry — the core local environment that performs advanced audio processing,
ffmpeg-based transformations, and manifest generation — represents the future research-tier of Envion.
While EnvionSeeder opens the system to the network, Foundry unlocks its full procedural power:
autonomous slicing, directory scanning, and large-scale sound recontextualization.
Access to Foundry will remain limited and may require a small contribution to sustain
the server infrastructure and ongoing research.
In short: Seeder lets you explore, while Foundry lets you build.
Both are part of the same ontological continuum — the generative, non-semantic ecology of Envion.
EnvionFoundry — core local environment for advanced audio processing, slicing, and procedural re-contextualization.
Reference Audio Profile — Perfect Parameters for Envion
The file FILE-MASTER-TIME.wav (≈9.2s) is used as the main
reference material for articulation analysis inside Envion.
It represents a perfect balance between micro-rhythmic density,
spectral brightness, and envelope-based structure —
ideal for Envion’s ternary seeding logic.
Waveform — seven main articulation peaks (perfect for auto-slicing).
Spectral energy centered around 1.4 kHz — bright and dynamic for layering.
Why this file works so well
Contains 7 intensity peaks defining natural articulation nodes
Balanced RMS energy with clear transients
Weighted mean frequency ≈ 1450 Hz
Duration within 8–12 seconds, perfect for Foundry slicing
Responds organically to DF-SEQ and Entropy Feeder modulation
Each intensity peak can act as a gesture seed inside Envion,
while the decaying parts serve as procedural silences for dynamic breathing.
This internal articulation makes it ideal for lowercase, acousmatic, or
entropy-driven environments.
External Link
This demonstration shows the FILE-MASTER-TIME.wav reference sample inside Envion.
You can clearly observe how the ternary envelopes interact with the waveform —
each peak acting as a natural gesture seed while the decay phases produce
micro-breathing silences that make the sound feel alive and organic.
The envelopes follow the declared terna logic precisely, resulting in
an almost musical phrasing despite the procedural nature of the system.
Click the button above to open the full video directly on YouTube.
In this video you can also hear how Envion’s foundry-prepared fragment maintains
its internal balance between density, brightness, and envelope responsiveness.
Even under fast or randomized modulation, its articulation remains coherent —
a rare combination that makes this file the perfect benchmark
for testing ternary seeding and procedural gesture interaction.
DEEPSCAN — Domestic Found Sound Loader
DEEPSCAN is Envion’s local inlet for sound: instead of fetching material from the network, it excavates the hard-disk and resurfaces what usually remains hidden inside DAW projects — frozen stems, backup renders, muted layers, rehearsal takes, caches and discarded attempts that were never meant to “exist” outside the session.
This is domestic archeology: highly alive material that carries the memory of the work process itself — not curated, not polished, just resurfaced as raw sonic residue ready to be articulated.
How it works
scans a chosen local directory (even entire project folders)
maps up to 8 active slots in a circular memory bank
each fragment is injected into Envion’s stereo arrays (not streamed)
can run as an autonomous 8-slot sequencer (clocked autoplay)
or be fully gestural when triggered manually (0) through Dynatext
internal micro-sequencer logic: slot rotation via round-robin or random selection (a minimal rotation sketch follows below). When autoplay is active, Deepscan behaves like an algorithmic drum machine; in gestural mode the arrays are driven entirely by Dynatext articulation. Once injected, the signal travels through Envion’s internal routing matrix (reverb, distortion, tape-echo, etc.).
By opening the console you can also see the absolute paths of the resurfaced files, making it easy to recover or reuse something that Deepscan has unearthed.
Deepscan inside Envion — 8-slot memory for domestic found sound.
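The slot logic can be pictured with a small scheduler sketch — pure illustration: the slot count of 8 comes from the description above, the class and method names are hypothetical, and the real scheduling lives inside the Pd patch.

```python
# Minimal sketch of an 8-slot circular memory with round-robin or random rotation.
import random

class SlotBank:
    def __init__(self, n_slots: int = 8):
        self.slots = [None] * n_slots   # each slot holds a path to a resurfaced fragment
        self.write_pos = 0
        self.read_pos = 0

    def inject(self, path: str) -> None:
        """Circular write: the oldest fragment is overwritten once the bank is full."""
        self.slots[self.write_pos] = path
        self.write_pos = (self.write_pos + 1) % len(self.slots)

    def next_slot(self, mode: str = "round-robin"):
        """Pick the next fragment to play, round-robin or random."""
        filled = [s for s in self.slots if s is not None]
        if not filled:
            return None
        if mode == "random":
            return random.choice(filled)
        choice = filled[self.read_pos % len(filled)]
        self.read_pos += 1
        return choice

bank = SlotBank()
for f in ["stem_a.wav", "backup_take.wav", "muted_layer.wav"]:
    bank.inject(f)
print(bank.next_slot(), bank.next_slot("random"))
```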
Support Envion — Through Its Sound Archive
Envion is entirely free and open — no paywall, no premium tier, no closed modules.
The sample archive is the only way to financially support its continued development.
Support Envion through its Sound Archive
A curated collection of source material — more than “samples”, these are
imprints: traces of gesture, transformation and process, built within the philosophy of Envion itself.
If Envion has value for you — artistically, technically or philosophically —
this archive is the most direct way to keep it alive and evolving.
(No subscription — a one-time gesture of support that circulates back into development.)
Envion on iPadOS/iOS
Quick Start Guide
Envion works perfectly on PlugData for iPadOS and iOS devices. The core functionality is fully operational without requiring any additional library installations.
Shorts
Envion running on PlugData for iPadOS/iOS.
Full functionality: presets, sample playback, stretch, randomization, and effects.
Understanding the Dependency Warnings
When you open Envion on PlugData for iPadOS, you may see warning messages about missing dependencies. Don't worry — this is completely normal and expected! These warnings indicate optional libraries that add extra features on desktop systems but are not required for Envion's core functionality on iPad.
iPadOS Sandbox Notes
On iPadOS, PlugData runs inside its own sandbox. This means it cannot access arbitrary folders on your device — it can only read and write inside its own Documents directory.
Why this matters
If the audio/ or data/ folders are placed outside of the sandbox, PlugData will show errors such as:
[soundfiler] read ... Operation not permitted
can't open file ...
How to fix it
Copy the entire Envion repository (including audio/ and data/) into the PlugData sandbox.
You can do this via:
Finder/iTunes File Sharing (PlugData → Documents)
iCloud Drive (place the folder in PlugData/)
Files app (any location accessible by PlugData)
Always use relative paths (e.g. ./audio/sample.wav) instead of absolute system paths.
What the warnings mean
The warnings refer to these external libraries:
ggee
ceammc
simplex
audiolab
These libraries are:
Optional — they add extra features but are not required
Desktop-only — they cannot be installed on iPadOS/iOS through PlugData
Safe to ignore — you can dismiss these warnings and use Envion normally
Known iPadOS Limitations (Important)
There are a couple of modules that are not functional on iPadOS at this time due to limitations in the PlugData runtime or because they depend on desktop-only libraries:
Net-Audio module: Not functional on iPadOS in this release. PlugData on iPadOS/iOS currently lacks support for URL-based loading and fetching which Net-Audio relies on, so the module remains inactive until support is added upstream. A related issue has been opened in the PlugData repository — follow the PlugData issue tracker for updates.
Dynagran module: The new Dynagran module is not yet compatible with iPadOS. It depends on features and/or optional libraries that are not available in the current iPad runtime, so Dynagran will remain inactive on iPad for now. Use Dynagran on desktop systems where required libraries are present and watch for updates.
These limitations are platform/runtime issues rather than problems with the Envion patches themselves.
What Works on iPadOS (Without Additional Libraries)
Full envelope sequencing — All dynatext functionality
Audio playback — Complete sample manipulation and playback
All presets — Load and use all included presets
Recording — Real-time recording of your output
Manual triggers — KEY-1 through KEY-5 controls
Automatic mode — Random list and random terna selection
Stretch controls — Time-stretching and envelope scaling
Matrix mixer — All routing and mixing features
Effects — Echo, reverb, distortion (Nuke module)
What Requires Optional Libraries (Desktop Only)
3D scope visualization — Requires simplex library
Advanced audio features — Some enhanced features require audiolab
Extended utilities — Certain additional features require ceammc and ggee
Important: The absence of these libraries does not affect the core envelope sequencing, sample playback, or preset functionality of Envion.
Built-in Libraries
PlugData includes these libraries by default (on all platforms including iPadOS):
cyclone — Used for gate~ objects and routing
else — Used for LFO, reverb, note labels, and various utilities
How to Use Envion on iPadOS
Download the Envion repository and transfer the patch files to your iPad
Open ___ Envion_v3.9_Plugdata_WIN-Ipad.pd in PlugData for iPadOS
Dismiss any dependency warning dialogs that appear
Load a sample using the "BROWSE audio" button
Turn on DSP (if not already on)
Play presets from the bottom-right preset section, or
Use manual triggers with KEY-1 to KEY-5
Transferring Files to iPadOS
You can transfer Envion files to your iPad using:
iCloud Drive — Place files in your PlugData folder
Airdrop — Send files directly from a Mac
Files app — Use any cloud storage service (Dropbox, Google Drive, etc.)
iTunes File Sharing — Transfer via USB connection
Once transferred, open the .pd files directly in PlugData.
Performance Tips for iPadOS
Start with a preset — the included presets work out of the box; open an issue if anything misbehaves
Monitor CPU usage — Envion is a complex patch, and some iPads may struggle with it
Close other apps — Free up system resources for better performance
Adjust buffer size — In PlugData settings, if audio glitches occur
Troubleshooting
Problem: Dependency warnings appear on startup
Solution: This is normal! Simply dismiss the warnings and continue using the patch.
Problem: No sound is produced
Solution:
Check that DSP is turned ON (toggle in PlugData)
Verify that a sample is loaded into the buffer
Ensure your iPad volume is up and PlugData has audio permissions
Problem: Patch won't load
Solution:
Make sure you're using the PlugData version: ___ Envion_v4.0_Plugdata_WIN_ipad.pd
Verify you have the latest version of PlugData for iOS
Check that the file wasn't corrupted during transfer
Problem: Missing preset files
Solution:
Ensure you transferred the entire Envion folder structure
The /data folder contains all the envelope preset files (dynatext)
The /audio folder contains sample audio files
Problem: Net-Audio or Dynagran module shows no activity
Solution:
Net-Audio: This module is not supported on iPadOS at this time due to URL-based loading limitations in PlugData. Check the PlugData issue tracker for progress and any suggested workarounds.
Dynagran: Dynagran is not compatible with current iPadOS PlugData builds. Use Dynagran on desktop systems where required libraries are available and watch the repository or PlugData issue tracker for updates.
PlugData Issue Tracker: Check the PlugData repository for issues and updates regarding Net-Audio, URL support and other iPad runtime features.
Still Have Questions?
Check the main README.md for general usage instructions
Review the HTML documentation in the html-guide folder
Open an issue on the GitHub repository with details about your problem
Remember: The dependency warnings are not errors — they're informational messages about optional features. Envion's core functionality is fully operational on iPadOS without any additional library installations. If a module isn't working (for example Net-Audio or Dynagran), it's due to platform/runtime limitations rather than a problem with the patch itself; please check the PlugData issue tracker for progress on these features.
For years, I explored different systems for handling envelopes dynamically — starting with software like Composer Desktop Project, and later with hardware generators such as Zadar in the Eurorack domain.
Envion - Plugdata version
I would like to emphasize how fascinating the world of envelope dynamics is, and how envelopes can imprint transformative tonal characteristics onto sounds. Out of this research, I developed Envion as a kind of gesture generator.
I soon realized that the most flexible way to manage thousands of segments was to use plain-text databases containing the necessary information. From there, I created the Dynatext system.
At the moment, I am working on formatting textual data from external APIs. In this way, Envion could become a powerful tool for generating thousands of random articulations, drawn not only from local lists but also from the variable data supplied by online APIs.
For example, by drawing on stock market data, weather information, or NASA’s extensive library of APIs — which are incredibly rich and fascinating. Even Co-Star, the app that calculates natal charts, makes wide use of them.
The system is designed for musique concrète/acousmatic music, sound design, and non‑metric writing.
What an Envelope-Driven System Can Do
To grasp, in simple terms, what a system that generates thousands of envelopes can achieve, consider this practical example:
In the video below, we start from a very short single sample (a few milliseconds — in this case, a percussive hit). Through the generation of gestural trajectories, that tiny fragment is multiplied into thousands of variants.
It follows that a single sample in Envion never sounds the same:
with each trigger, both time-stretch and temporal shape change,
turning the sample into thousands of sonic variations instead of a static file.
This happens because with each trigger the sound receives not only an envelope — which can be quite complex, with multiple stages — but also a stretch factor that remodels the source material, forcing it to adapt to a new time domain. If you open a file in the /data folder, you’ll notice that many parameter strings contain numerous successive stages.
In this sense, the term algorithmic drum machine is appropriate. That said, time can be further deformed, both through manual stretching and through procedural processes.
Video
A single simple sample creates an almost infinite succession of events. Watch: Algo LPG Drums.
How to Read a Triple (amp – dur – offset)
In the example patch, the message box contains a long list of numbers.
[list split 3] breaks each sequence into three values:
Amplitude (target value, e.g., 1 or 0.2)
Duration (in ms)
Offset (start time in ms)
These are sent to vline~, which builds the temporal trajectory.
Timeline of the Example List
1 50 0 → start at 0, ramp to 1 in 50ms → end = 50
0.2 200 50 → start at 50, ramp to 0.2 in 200ms → end = 250
0.8 100 250 → start at 250, ramp to 0.8 in 100ms → end = 350
0 20 350 → start at 350, ramp to 0 in 20ms → end = 370
1 10 370 → start at 370, ramp to 1 in 10ms → end = 380
0 50 380 → start at 380, ramp to 0 in 50ms → end = 430
1 10 430 → start at 430, ramp to 1 in 10ms → end = 440
0 50 440 → start at 440, ramp to 0 in 50ms → end = 490
1 10 490 → start at 490, ramp to 1 in 10ms → end = 500
0 50 500 → start at 500, ramp to 0 in 50ms → end = 550
In practice, vline~ reads the sequence as a multi-stage envelope,
where each segment begins from the final value of the previous one. In the provided patch,
the envelope output multiplies the oscillator, shaping the sound exactly according to the list.
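The same bookkeeping can be reproduced offline. The sketch below is a plain Python illustration rather than anything used by the patch; it walks the example list exactly as the timeline above does.

```python
# Sketch: compute the segment timeline of a vline~-style amp/dur/offset list.
def timeline(flat_list):
    """Split a flat list into (amp, dur_ms, offset_ms) triples and print each segment."""
    triples = [flat_list[i:i + 3] for i in range(0, len(flat_list), 3)]
    for amp, dur, offset in triples:
        print(f"start at {offset:g} ms, ramp to {amp:g} in {dur:g} ms -> end = {offset + dur:g} ms")

# The example list from the table above:
timeline([1, 50, 0, 0.2, 200, 50, 0.8, 100, 250, 0, 20, 350, 1, 10, 370,
          0, 50, 380, 1, 10, 430, 0, 50, 440, 1, 10, 490, 0, 50, 500])
```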
Try it Yourself
Inside the Envion directory you’ll find a patch called terna-sample.pd.
Open it and try changing the content of the list:
pick a file from /data
copy and paste one of the envelope strings into the message box
listen to the result
Further down, I also explain the concept of triplets in greater detail and how the algorithm handles them.
This small exercise will help you better understand how the triple-based system
works and how each gesture is constructed from amplitude, duration, and offset values.
Key idea
Instead of “playing” files, Envion writes trajectories on them through numeric envelopes (dynatext). This enables hyper‑articulated hits, slow morphs, irregular internal delays, and pseudo‑organic behaviors.
At its core, Envion adds an algorithmic layer that keeps the envelope and the sample tightly coupled, preserving coherence while enabling complex, generative transformations.
Inside the repository there is also a version tailored for PlugData. It’s worth noting that this version is significantly more performant: unlike Pd-vanilla, where the audio and GUI share the same thread, PlugData (built on JUCE) separates the audio engine from the graphical interface. This reduces overhead, prevents dropouts when interacting with the patch, and makes real-time processing smoother. The JUCE-based architecture also improves GUI responsiveness, event handling, and CPU scheduling, resulting in noticeably faster and more stable performance, especially on older machines.
In Pure Data, go to Help → Find Externals… (opens Deken).
Search and install each library: cyclone, ggee, ceammc, else, simplex, audiolab.
If prompted for a location, install to your user externals folder (e.g., ~/Documents/Pd/externals).
Restart Pure Data so the new objects are available.
Using Envion
As a procedural environment, in most cases it is sufficient to load a sample, record the output for several minutes, and then select the most interesting portions of the generated audio.
Load a sample into the main buffer.
Enable Random Terna (checkbox below the Dynatext Cloud).
Enable Random List (central checkbox).
Record the output for several minutes.
Select the most significant sections of the recorded audio.
This approach highlights Envion’s nature: it is not about “playing” directly, but about generating emergent sonic material from which fragments can be extracted for composition.
Tip — Keyboard improvisation & safety
Once you toggle KEY ON/OFF, your computer keyboard becomes a live controller.
Know the shortcuts—but then improvise: play the QWERTY like an instrument and react to what Envion generates.
MIDI mapping is of course possible, yet in Envion’s DIY spirit the motto is:
open the laptop and play—no cables, no menus, just gesture.
Emergency stops
If you experience sonic instability (runaway feedback or unpredictable behaviors):
6 — Graceful stop: interrupts input and lets the last trajectory complete.
7 — Hard stop (PANIC): forces vline~ to 0, effectively muting almost any sound immediately.
Keyboard Shortcuts
BACKSPACE — To Start
KEY-1 — Manual Strike
KEY-2 — Original Speed
KEY-3 — Stop Original
KEY-4 — Retrigger
KEY-5 — Random Terna Seq
KEY-6 — Breakdown
KEY-7 — PANIC
Japanese Wood — Envion test (YouTube Shorts)
Shorts
I loaded the Japanese Wood (Akira Wood) preset inside Envion to soundtrack a scene from Dreams (1990) by Akira Kurosawa — the Kitsune Wedding sequence, where the child wanders through the forest.
All the percussion comes from Envion, with a few strikes of hyōshigi (Japanese ritual wooden clappers) taken directly from the film.
Note — When loading material with high headroom (low volume), use the array normalization utility (top‑left). For mono material, a Mono → Stereo function (top‑right) mirrors data by copying the left array into the right array.
Ultra‑stereo material is recommended for this kind of application. When loading and mirroring mono material, activate Nuke on alternate channels of the matrix mixer to emphasize differences between left and right arrays, widening the stereo field.
First Steps with included audio materials
To start experimenting, try loading the file:
/audio/env_0001.wav
This reel was created specifically for Envion using my modular synthesizers (Orthogonal Devices ER‑301, Morphagene, and several Low Pass Gates). It was then reamped — played back through speakers and re‑recorded in the room — to capture the original ambient nuances of the space.
The result is a material that embodies a contrast:
Surreal gestures generated by modular synthesis.
Immersed within a real acoustic environment that imprints its own depth and imperfections.
This interplay between the synthetic and the real, between algorithmic articulation and spatial resonance, is at the core of Envion’s aesthetic exploration.
IMPRINTAPE — Convolution Tape Player
IMPRINTAPE is Envion’s new convolution-based tape deck (requires audiolab).
It imprints sound using real IRs from physical cassette recorders, preserving both mechanical colour and temporal behaviour.
➤ load live audio or samples
➤ play / stop / rewind / fast-forward
➤ real transport emulation
➤ controllable tape hiss
➤ wow & flutter modulation
Routing (Envion Integration)
IMPRINTAPE follows Envion’s global routing matrix:
it can be sent to the Main Reverb, Spatial Panner, and Tape Echo,
or kept dry on flat channels.
Included Tape IRs
➤ IR-Dual-CC3600-1 — warm / dark / gentle
➤ IR-Dual-CC3600-2 — grain + soft compression
➤ IR-Dual-CC3600-3 — noisy / vintage highs
➤ IR-Dual-CC3600-4 — dusty, rolled-off top end
➤ Panasonic-269-1 — hi-fi, stable lows
➤ Panasonic-269-2 — stronger saturation / colour
Why this module matters
Unlike Envion’s envelope-driven modules, IMPRINTAPE is continuous —
it is not tied to Dynatext articulation and can run independently as a tape-bed or
as a foreground texture:
➤ background textures
➤ evolving atmospheres
➤ continuous tape presence
➤ organic colour before spatialisation
Video
Demonstration of the IMPRINTAPE module inside Envion.
Real convolution · tape imprint · continuous layer · full routing
Imprintape — Assets & Routing
Inside the /audio/___tape-audio folder you’ll find a couple of
soundscape and ambience files you can load right away.
Imprintape runs upstream of Envion, so you can also use the module on its own — just
remember to route it to the desired output through the matrix routing.
Flat — dry output, no additional processing.
Nuke — distortion stage.
Fallout — reverb bus.
Pan Mix — spatial mixer / panner.
Tape Echo — tape-style delay.
Each convolution type in Imprintape allows the incoming signal from Envion to merge
seamlessly with the loaded tape material. Because every impulse response carries its
own tonal fingerprint, some IRs emphasize low frequencies while others naturally roll
them off — exactly as it happens on real cassette decks and tape machines.
Convolution is a uniquely powerful sound-design tool: instead of layering an effect,
it re-embeds the sound into a new acoustic body, transferring the physical
behaviour, saturation profile and spectral boundaries of the captured device.
I am also working on adding amplifier and cabinet convolution sets, extending the same
physical “body-imprint” approach beyond tape. The whole Envion ecosystem comes from
my own needs as a composer and sound designer, developed over years of searching for
a tool that behaved like this.
Dynagran - Dynamic Granulator
Dynagran is the granular module of Envion 5.0, designed to naturally expand both timbre and structure.
It is not a mere effect: it’s a language amplifier that plugs into Envion’s flow and operates in continuity with its
envelope-first logic (amplitude, duration, offset).
Why Dynagran
Instant activation at any point in the flow: injects granular material into existing articulations.
Aligned with Dynatext: works within Envion’s envelope/time system, making sound more flexible, layered, and responsive.
Microsound focus: Envion is conceived for microsound; time is handled at a microscopic scale where structure emerges from detail.
Stereo handling and spatial depth
Dynagran runs in dual stereo, reading two distinct arrays (left and right).
A slight offset between the read indices widens the stereo field and introduces micro-temporal variations, yielding a vivid, three-dimensional spatial impression.
This approach works particularly well with mono material, as the offset naturally generates small time differences between left and right channels, creating a wider and more organic stereo image.
Dynagran module interface in Envion 5.0
Stretch & suspension
With careful stretch fine-tuning you can obtain drone-like, suspended textures.
Since Envion has an inherently rhythmic/ballistic nature, this capability can be overlooked at first:
slowing down and stretching the material reveals a timbral continuity that lives between detail and mass.
There are some peculiarities in Dynagran that make it more than just a simple granular processor. Since it allows the grain duration to be exponentially extended up to 4 seconds, the module becomes a kind of unusual articulator. By activating the LFO parameters and increasing the grain time stretch knob, the playback duration of each grain can produce truly unexpected results.
The whole philosophy of Envion is based on this logic — one that values micro fine-tuning and patience. Those who approach this instrument too quickly or impatiently will mostly end up generating material of poor quality.
Grain envelopes: change the curve, change the matter
Each grain is articulated by a selectable envelope. Changing the curve does more than reshape amplitude over time:
it transforms attack softness, perceived density, and how grains fuse together.
Linear — essential and transparent; minimal transitions, crisp definition.
Gaussian — energy centered in the middle; organic, fluid textures.
Triangular — sharper articulation; great for micro-percussive/glitchy behavior.
Double-Hump — dual internal pulse; motion and micro-rhythm within a single grain.
Asymmetric (skewed) — emphasizes attack or decay; directional gestures.
Exponential / Logistic — rapid rises/falls; brighter or “spectral” impressions.
This flexibility lets Dynagran shift seamlessly from carved microsound to expansive drones.
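For intuition, a few of the curves listed above can be sketched with numpy; the formulas here are generic textbook shapes, not necessarily the exact tables used inside Dynagran.

```python
# Illustrative grain-envelope shapes (generic formulas, not Dynagran's internal curves).
import numpy as np

def grain_envelope(kind: str, n: int = 512) -> np.ndarray:
    """Return one grain window of n samples (for intuition only)."""
    t = np.linspace(0.0, 1.0, n)
    if kind == "linear":          # essential and transparent: straight ramp up, straight ramp down
        return 1.0 - np.abs(2.0 * t - 1.0)
    if kind == "gaussian":        # energy centred in the middle: organic, fluid fusion
        return np.exp(-0.5 * ((t - 0.5) / 0.15) ** 2)
    if kind == "double-hump":     # dual internal pulse: micro-rhythm inside a single grain
        return np.sin(np.pi * t) * np.sin(2.0 * np.pi * t) ** 2
    if kind == "asymmetric":      # fast attack, long decay: a directional gesture
        return np.where(t < 0.1, t / 0.1, np.exp(-4.0 * (t - 0.1)))
    raise ValueError(f"unknown envelope: {kind}")

for k in ("linear", "gaussian", "double-hump", "asymmetric"):
    env = grain_envelope(k)
    print(k, "peak:", round(float(env.max()), 3), "mean:", round(float(env.mean()), 3))
```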
Fine control: dedicated LFOs
Every Dynagran parameter features its own dedicated LFO for automation.
Tip: work with small variations. Envion is designed to respond to microscopic changes — complexity and instability emerge from precision, not from excess.
Parameters and dedicated LFOs
Each primary parameter has an autonomous LFO, enabling dynamic internal automation consistent with Envion’s non-deterministic, cybernetic behavior.
GRAIN-START
Sets the read position inside the buffer. LFO/START enables microscopic temporal shifts and continuous fragmentation.
GRAIN-DUR
Controls grain duration. With LFO/DURATION you can traverse from short, pointillistic events to suspended, drone-like structures.
GRAIN-OVERLAP
Defines density via the amount of overlap between grains. LFO/OVERLAP alternates thinning and thickening phases, making the granular stream breathe.
GRAIN-ENV
Selects the envelope curve for each grain (linear, cosine, gaussian, triangular, double-hump, asymmetric, exponential/logistic).
Changing the curve reshapes attack perception, fusion between grains, perceived density, and stereo depth.
Dynagran — Delay-first routing
The signal coming from Dynagran is not routed into the main reverb by default, but into a
dual echo delay, which also features a post-reverb function
applied to the tails. Of course, the signal can also be completely dry or be routed into the main reverb, and it is active across all six main matrix slots. I made this choice because, in my opinion, granular processors
tend to rely excessively on reverb, which often results in a
standardized and homogenized granular sound. It’s easy to make everything
seem spatially rich just by adding reverb, but that approach makes all reverberated
textures sound almost the same.
By contrast, using delay enhances the separation and articulation of grains,
while the subtle post-reverb applied within the delay tails provides depth without compromising
the individuality of each grain.
Take some time to familiarize yourself with Envion’s routing, which in my opinion remains the most complex part to grasp, as it interacts in multiple ways.
Tip: always try small modulations. Envion thrives on minimal deltas; life emerges from detail.
With the new release of Envion v4.0 several important updates have been introduced.
In addition to the dual-matrix echo delay, there is now a dedicated
LFO for stretch automation, allowing you to modulate the stretch factor from
1% up to 40%.
Keep in mind: the higher the stretch factor, the more both the sound and its envelope
will expand proportionally. To avoid unwanted artifacts, a scaling limit at 40% has
been implemented — this prevents entering ranges where you’re more likely to hear
low-frequency clicks instead of musical articulations.
The LFO can be enabled or disabled via a dedicated spigot, and of course
you’re free to tweak both the LFO speed and the stretch percentage
to shape the behavior in real time.
When you load a preset, you can always return to the original stretch
by simply re-triggering the preset. If you’re experimenting with your own material
and you find a moment that sounds inspiring, make sure to note the stretch value
at that point.
For convenience, next to the LFO you’ll find a float reminder: whenever you
load or change a preset, this number will display the current stretch value, so you can
keep track at a glance.
Very important note: if you are using short percussive samples, start with stretch
factors between 1% and 7–8%, and from there decide whether to lengthen
or shorten. If you start from higher stretch values, a sample that lasts only a few milliseconds will be smeared into an extremely elongated sound, or even into silence. This can of course be an intentional
effect, but the spectral characteristics of a short sound will not always remain
convincing under extreme time-stretching.
💡Tip
When loading a very short sample (such as a percussive sound), adjust the stretch factor manually (use the vertical slider, not the horizontal auto-stretch). Setting it to the minimum ensures that the envelope perfectly matches the duration of the sound, preventing unwanted stretching.
Procedural Randomization Automation
By enabling both checkboxes, Envion activates a procedural randomization process that automatically draws from 19,000 pre‑defined triplets and applies the X factor to each segment of the terna, imposing its own time‑stretch and creating the sonic gesture.
Random Terna: continuously loads text files from the /data folder (each file ≈ 1,000 envelopes/triplets).
Random List: randomly selects one of the 1,000 available lists.
This combines automatic loading and random selection, producing an ever‑changing and potentially infinite stream of events.
The patch may look intimidating at first, but it is intentionally left “alive” (with formulas and functions visible) to encourage exploration. Once you learn the few basic operations (keys 1–5 and the space bar, enabled via a flag), it is often best to record the output to capture unique articulations that are hard to reproduce exactly.
Freeze and Stretch
💡 Freeze a sample in Envion (pseudo-FFT feel)
In this video I show how to “freeze” a sample in ENVION. I used an Amen Break as an example: by manually adjusting a few parameters, the final result strongly resembles an FFT transformation, even though the process itself is not technically spectral.
Here’s the interesting part: with the vertical stretch factor slider set to the minimum, the envelope is forced to perfectly match the duration of the sound, avoiding unwanted stretching; then, by massively increasing the stretch factor, the sample progressively loses its rhythmic articulation and turns into a suspended sound mass. During this stretching phase, it’s normal to hear some glitches, since the factor is forcing the shape of the sound by stretching or compressing it. Once you find the sweet spot, the sound remains suspended and frozen.
Adding some reverb enhances the impression of a “frozen” texture. The outcome is a kind of sonic illusion: there’s no actual FFT analysis happening, but the resulting aesthetic easily evokes a spectral transformation. It’s basically a “wannabe FFT”: a freeze effect achieved through different means, yet still capable of delivering a similar sensation.
The Nuke module processes the left and right channels with slight differences in the filter and clipping stages. These micro‑variations introduce phase shifts and asymmetries between L and R, resulting in:
Stereo widening: L and R are no longer identical, creating a broader image.
Perceptual instability: small discrepancies cause a lively, shifting space.
Enhanced aggression: distortion artifacts differ per channel, yielding a wider, noisier stereo field.
Distortion/overload utility snapshot
This design makes Nuke not only a distortion stage but also a stereo expander through destruction. The contrast between similar but non‑identical processing of L and R gives the module strong spatial depth.
Stereo: L/R channels with slightly different times create a wider field.
Feedback: controls the number of repeats, from subtle to regenerating.
Flutter: small random variations of delay time, making it more “alive” and unstable.
Post-Reverb: reverb applied only to the echo tails, adding depth.
Sends: send amount to Echo-L / Echo-R from the mixer to decide how much signal enters.
The two synthesized sounds (demo on the right)
Filtered burst: a short envelope (line~) multiplies noise~
inside a bp~ (band-pass). Result: sharp, bright hits.
Grainy tone: noise~ through bp~ with variable frequency
(MIDI scale → mtof), fast envelope. Result: more “tonal” accents.
Together, the two sounds fill the stereo space: the Echo’s micro-shifts create width and motion.
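For readers who prefer code to patch cords, here is an offline numpy/scipy sketch of the first source (white noise → band-pass → short envelope); the filter design and envelope times are illustrative, while the patch itself simply uses noise~, bp~ and line~.

```python
# Offline illustration of the "filtered burst": noise -> band-pass -> short envelope.
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100

def mtof(midi_note: float) -> float:
    """MIDI note to frequency, the same mapping as Pd's [mtof]."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

def filtered_burst(freq_hz: float, dur_ms: float = 60.0, q: float = 8.0) -> np.ndarray:
    n = int(SR * dur_ms / 1000.0)
    noise = np.random.uniform(-1.0, 1.0, n)
    bw = freq_hz / q
    b, a = butter(2, [(freq_hz - bw / 2) / (SR / 2), (freq_hz + bw / 2) / (SR / 2)], btype="band")
    band = lfilter(b, a, noise)
    # fast attack, short decay -> sharp, bright hit
    env = np.concatenate([np.linspace(0, 1, n // 10), np.linspace(1, 0, n - n // 10)])
    return band * env

hit = filtered_burst(mtof(84))  # a bright hit around 1046 Hz
print(hit.shape, float(np.abs(hit).max()))
```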
Project structure
Envion_v3.6.pd → main patch
audio/ → test samples and audio files
data/ → terna data (Dynatext) and presets for slicing/algorithms
html-guide/ → guides and documentation (HTML/CSS)
Dynatext Cloud Sequencer
The concept of Terne
One of the central elements of Envion is the use of terne (triplets of numerical values). Each terna defines the behavior of a sound fragment through three main parameters:
Duration – relative or absolute time of the event (in ms or scaling factor).
Amplitude – the signal level, which can be constant or shaped by an envelope.
Offset / Position – the reading point or starting position of the fragment within the sample.
Examples of terne
0.452 80 0 ; → 452 ms duration, amplitude 80, offset at start of sample
0.210 45 600 ; → 210 ms duration, amplitude 45, offset 600 ms into the sample
0.879 100 1280 ; → 879 ms duration, full amplitude, offset 1280 ms
What are Dynatext?
Dynatext are the true databases of Envion: they are not “small” files, but large archives containing up to 1000 lines each. Every line corresponds to a complete trajectory, described through a numerical triplet (amplitude, time, offset), which is interpreted by the engine to drive envelopes.
These files, stored in the /data folder, form a vast repertoire of complex gestures ready to be activated, combined, and transformed. By exploring the text files, you can easily understand how they are structured and, if you wish, create your own — although the existing library already covers a wide range of sonic behaviors.
Why I use vline~ instead of line~
line~ only accepts a target and a time → simple, linear ramps.
vline~ accepts an entire sequence of concatenated triplets (value, duration, delay), enabling complex articulations: micro-curves, pauses, multiple attacks, temporal bounces.
Instead of mere linear ramps, Envion works with fully-fledged dynamic phrases, richer and more expressive.
Random List and Random Terna
The system takes on an even more non-deterministic behavior when the two randomization checkboxes are enabled:
Random List → randomly selects one of the 17 Dynatext files in /data.
Random Terna → within the chosen file, randomly picks one of the 1000 lines.
This happens simultaneously: Envion randomly chooses both the file and the line inside it, yielding a very high degree of chance and variability. Each activation can produce a completely different sonic behavior, even with the same source material.
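A minimal offline analogue of this double randomization, assuming the /data layout described above (plain-text Dynatext files, one terna per line); in the patch this is of course done with Pd objects, not Python.

```python
# Sketch: two-level randomization (random file, then random line/terna) done offline.
import random
from pathlib import Path

DATA_DIR = Path("data")  # the /data folder of the Envion repository

def random_terna() -> str:
    """Pick a random Dynatext file, then a random non-empty line (terna) inside it."""
    files = sorted(DATA_DIR.glob("*.txt"))           # Random List: choose the file
    chosen_file = random.choice(files)
    lines = [l.strip() for l in chosen_file.read_text().splitlines() if l.strip()]
    return random.choice(lines)                      # Random Terna: choose the line

if __name__ == "__main__":
    print(random_terna())   # e.g. "1 50 0 0.2 200 50 0.8 100 250 ..."
```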
The role of Stretch
The key control is the Stretch parameter, which adapts the trajectories to the time domain of the audio material (using a term familiar to Max/MSP users). By adjusting Stretch, Dynatext trajectories are compressed or expanded in time:
Low values → fast, percussive, almost microscopic gestures.
High values → slow, broad, dramatic evolutions.
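In terms of the triplets themselves, Stretch boils down to scaling the time fields. Below is a minimal sketch of that scaling, assuming the amp–dur–offset layout used in the earlier example.

```python
# Sketch: apply a stretch factor to the duration and offset fields of an amp/dur/offset list.
def stretch_terna(flat_list, factor):
    """Scale every duration and offset by `factor`; amplitudes stay untouched."""
    out = []
    for i in range(0, len(flat_list), 3):
        amp, dur, offset = flat_list[i:i + 3]
        out += [amp, dur * factor, offset * factor]
    return out

gesture = [1, 50, 0, 0.2, 200, 50, 0, 20, 250]
print(stretch_terna(gesture, 0.25))   # low values -> fast, percussive micro-gestures
print(stretch_terna(gesture, 4.0))    # high values -> slow, broad evolutions
```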
In summary
Large archives (1000 lines × 17 files)
Multi-level randomization (file + line)
Fine time-domain control via Stretch
Together, these elements make Dynatext not just predefined envelopes, but a true generative machine of dynamic articulations, capable of endlessly surprising outcomes.
Semantic Class – List Validation and Categorization
The patch duration_flag_800.pd implements a basic semantic check for incoming lists (vline‑style). It ensures structural validity and assigns each list to a category before it is passed on.
Step‑by‑step logic
Input (inlet) A list in vline~ format enters the patch (usually a triplet: duration – amplitude – offset).
Length check (list length) — at least 3 elements; otherwise flagged as invalid.
Splitting and unpacking — extract the first three values (unpack f f f), first is duration.
Duration test (moses 500) — < 500 ms → percussive; ≥ 500 ms → hybrid.
Routing — invalid lists discarded; valid ones tagged and forwarded.
This acts as a semantic filter: it checks structural validity, then classifies by duration so Envion can route lists by temporal behavior.
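The same logic, written out in Python for readability — a sketch mirroring the Pd objects described above (list-length check, unpack f f f, moses 500); the category tags are illustrative.

```python
# Sketch of duration_flag_800.pd's logic: validate a vline-style list and classify it.
def classify(vline_list):
    """Return ('invalid' | 'percussive' | 'hybrid', list), mirroring the moses 500 split."""
    if len(vline_list) < 3:                          # length check: at least 3 elements
        return "invalid", vline_list
    duration, amplitude, offset = vline_list[:3]     # unpack f f f (first value = duration)
    if duration < 500:                               # moses 500
        return "percussive", vline_list
    return "hybrid", vline_list

print(classify([120, 0.8, 0]))    # -> percussive
print(classify([900, 0.5, 40]))   # -> hybrid
print(classify([1]))              # -> invalid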
Quick Start
Load a list from Dynatext Cloud (or select a local .txt in data/).
Browse a sample (WAV) and assign it as the playback buffer.
Turn on DSP and explore.
Use the manual triggers and sliders to test sequences.
Adjust the stretch factor to compress/expand time.
Try the ready‑made presets (bottom area).
Timebase & $0-factor
The timebase module retrieves the buffer duration (samples → milliseconds), exposes it as $0‑durata, and calculates $0‑factor for the global stretch of envelopes.
// from samples to milliseconds (44.1 kHz)
expr round((($f1 * 1000.) / 44100) * 100) / 100
$0‑factor applies to times of each segment.
Not mandatory when using terne as parameter modulations (e.g., FM resonance, filter index, temporal stretching).
Original‑speed playback: 0, <array_size> <durata_ms> — scans the entire buffer in durata_ms at constant speed.
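For reference, the conversion performed by the expr above can be written out as a one-line Python function (assuming a 44.1 kHz buffer, as in the patch).

```python
# Sketch of the timebase expr: buffer length in samples -> duration in ms, rounded to 2 decimals.
def samples_to_ms(n_samples: int, sample_rate: int = 44100) -> float:
    return round((n_samples * 1000.0 / sample_rate) * 100) / 100

# e.g. a 405,000-sample buffer at 44.1 kHz:
print(samples_to_ms(405_000))   # 9183.67 ms, roughly the ~9.2 s reference file
```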
Workflow
Load a sample → openpanel → soundfiler into sampletabL/R. If mono, use Mono→Stereo (copy L→R).
Load an envelope library → text define/get. Each line = one terna. Select or randomize.
Play → autoplay or manual keys: KEY1–4 (strike, original‑speed, stop, retrigger).
Record → from AUDIO RECORDER block.
Lists of Terne (1000 envelopes each — total 19k)
default.txt — neutral baseline.
perc.txt — fast attacks, short decays.
vline_perc_1/2 · vline_ultra_perc_3.txt — percussive variants from soft to extreme.
Line 2 = 2‑segment envelope. Avoid all‑zero lines (silence).
Autoplay & Manual Player
Autoplay: a metro drives text get; last strike duration can trigger next step (END listener).
Manual:
KEY1 = strike
KEY2 = original‑speed
KEY3 = stop
KEY4 = retrigger
Smart concatenation: internal delays in terne allow irregular patterns without reprogramming the metro.
Playback Engine
tabread4~ sampletabL/R — 4‑point interpolation, indexed by vline~
*~ / pow~ — amplitude control + shaping
snake~ — stereo/multichannel routing
safety — clip~ headroom
Note: tabread4~ never stops until index=0 or out of buffer. For immediate stop: send clear/stop to vline~, or drop amp to 0.
Quick Play & Algorithmic Drum Machine
Manual Strike Mode
Load any list from the Dynatext Cloud.
Assign a sample (short percussive ones work best).
Use KEY‑1 (Manual Strike) to trigger individual gestures. Each line becomes a distinct hit.
This simple workflow turns Envion into an algorithmic drum machine: browse different lists and strike manually to generate unique percussive articulations and irregular rhythms.
Tips & Tricks
Pair short samples (kicks, snares, metallic hits) with percussive lists (perc.txt, random_delayed_perc.txt).
Try drone/long lists on short samples for stutters and stretched hits.
Map envelopes to parameter modulation (filters, FM index) instead of playback.
Alternate between manual strike and autoplay to balance control and emergence.
For drum‑like grooves, use Random List + Random Terna and keep sample ≤ 500 ms.