Advanced mapping for M-Audio Xponent

I have created an alternate mapping for the M-Audio Xponent, and before submitting a pull request to get it into future versions of Mixxx, I’d like to solicit some feedback. The Xponent seems like a great match for Mixxx in my mind. It has a ton of control surfaces beyond the basics, and I wanted to take advantage of them.

I took the stock 2.0 mapping and made some pretty drastic changes, mostly additions. I’ll just start from the top of the controller and work my way down, and hopefully I don’t leave anything out.

You may want to refer to the diagram in the Xponent documentation (https://www.manualslib.com/manual/569175/M-Audio-Torq-Xponent.html?page=7#manual).
I’ll refer to the diagram numbers in the list below.

10, 11, 12) The PFL (headphone) and scratch-enable buttons are stock, as are the jog wheels.

  1. The Big-X button is mapped to the Brake effect. If you let go before the track comes to a complete stop, it will continue playing. If you hold it until the track stops, it stays stopped. (A rough sketch of the brake call follows this list.)

  2. The Big Minus button is reverse-play, but it’s momentary rather than a toggle like the standard mapping.

  3. The nudge buttons are the same as usual, but mapped in the opposite direction from the Mixxx UI to be more mnemonic in my mind. Pressing the left nudge speeds the track UP, “nudging” it further to the left if you’re watching the beatgrid.
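
For the curious, the brake behavior boils down to a pair of calls to the scripting engine's brake helper. Here is a minimal sketch, assuming the engine.brake(deck, activate) helper and a hypothetical handler name; the actual code in the mapping may differ:

// Hypothetical handler for the Big-X button on deck 1.
// Holding the button ramps the deck down; releasing it before the
// track has stopped lets playback resume (as described above).
MaudioXponent.bigXButton = function(channel, control, value, status, group) {
    var deck = 1;                  // deck number, 1-based
    engine.brake(deck, value > 0); // value > 0 means the button is pressed
};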

24, 25) The deck knobs and buttons perform different duties on the left and right sides. The left side controls the samplers, with the knobs controlling the volume, and the buttons firing the samples. The volume knobs are all soft-takeover-enabled.
The right-side controls handle the effects, and need a little explanation. Pressing a button changes which effect (1-4) currently has the focus, and the button lights up accordingly. Pressing the same button again toggles that effect on and off. The knobs control the parameters of whichever effect currently has the focus: the first three knobs correspond to the first three parameters of the effect, and the fourth knob always controls the wet/dry mix. Most of Mixxx’s effects only have two or three parameters, so this works well. The Echo effect has four parameters, so there is unfortunately no knob for the PingPong parameter. Due to a limitation in Mixxx 2.0, the parameter knobs are not soft-takeover, so be careful; hopefully they’ll work better in 2.1. Holding shift (15) while pressing one of the buttons will cycle which effect is loaded into that slot.
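
To make the knob routing concrete, here is a rough sketch of the idea, assuming the 2.0-era effect group names ([EffectRack1_EffectUnit1_EffectN] for the effect parameters, [EffectRack1_EffectUnit1] for the wet/dry mix), the engine.setParameter helper, and hypothetical helper names; it is an illustration, not the exact code in the mapping:

MaudioXponent.focusedEffect = 1; // which effect (1-4) the buttons last selected

// Hypothetical helper for the first three right-side knobs.
// knobIndex is 1-3 and maps to parameter1-parameter3 of the focused effect.
MaudioXponent.effectParamKnob = function(knobIndex, value) {
    var group = "[EffectRack1_EffectUnit1_Effect" + MaudioXponent.focusedEffect + "]";
    engine.setParameter(group, "parameter" + knobIndex, value / 127);
};

// The fourth knob always controls the unit's wet/dry mix.
MaudioXponent.effectMixKnob = function(value) {
    engine.setParameter("[EffectRack1_EffectUnit1]", "mix", value / 127);
};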

  1. The row of LEDs below the deck buttons shows the progress through the song and will start to flash at 75%. This probably won’t align with Mixxx’s end of track warning, which defaults to 30 seconds before the end of the track. I may revisit this one to make them match; I didn’t want to over-burden the script with math just yet, but I’ll try to fix that up next.

30, 35, 37) Fast-forward/Rewind, Cue, and Play are what you’d expect, nothing unusual here.

  1. Buttons 1-5 are hotcues. Press them to set or play a hotcue. Holding shift while pressing 1-5 will clear that hotcue. Pressing a hotcue while the track is playing will jump to that hotcue and continue playing. Pressing a hotcue while the track is stopped will play the hotcue but stop when the button is released.
    The |< and >| buttons will shift the beatgrid on that deck to the left or right so you can make minor adjustments on the fly. Holding shift while pressing either button will align the beatgrid to the current position.
    The padlock button toggles the keylock on that deck. Holding shift while pressing the lock button will toggle “quantize” for that deck.
    The small + and - buttons increase or decrease the track speed accordingly.

  2. The deck volume sliders do what you expect, but are soft-takeover-enabled in this mapping. If you don’t use them, you can safely “stow” them at either extreme so that you don’t accidentally upset them.

  3. The looping section is fully functional. 1, 2, 4, and 8 will set loops of 1, 2, 4, or 8 beats. Holding shift while pressing one of them will do a rolling loop of 1, 1/2, 1/4, or 1/8 beat, resuming playback where it would have been without the loop when the button is released. The loop enable, begin, and end buttons do their normal thing. (A rough sketch of the loop calls follows this list.)
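
Under the hood the loop buttons just drive the beatloop controls. A rough sketch, assuming the beatloop_X_activate and beatlooproll_X_activate controls and a hypothetical handler name (deck 1 only for brevity):

// Hypothetical handler for the 1/2/4/8 loop buttons on deck 1.
// beats is 1, 2, 4 or 8; shifted is true while the shift button is held.
MaudioXponent.loopButton = function(beats, shifted, value) {
    if (shifted) {
        // Rolling loop of 1, 1/2, 1/4 or 1/8 beat while held; on release
        // playback resumes where it would have been without the loop.
        engine.setValue("[Channel1]", "beatlooproll_" + (1 / beats) + "_activate", value ? 1 : 0);
    } else if (value) {
        // Plain loop of 1, 2, 4 or 8 beats on press.
        engine.setValue("[Channel1]", "beatloop_" + beats + "_activate", 1);
    }
};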

Everything in the center EQ section is normal, with EQ band kills doing what you’d expect.

  1. The Sync buttons behave as usual, but flash to the beat of the song playing on that side.

  2. Punch-in momentarily centers the cross-fader. If the cross-fader is all the way to the left, then the right punch-in will center it until released, and vice versa. (A rough sketch follows this list.)

  3. The cross-fader is soft-takeover-enabled in this mapping.
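
The punch-in idea is simply to remember where the cross-fader was, slam it to center on press, and restore it on release. A minimal sketch with a hypothetical handler name (the right punch-in shown; the left is the mirror image):

// Hypothetical handler for the right punch-in button.
MaudioXponent.punchInRight = function(channel, control, value, status, group) {
    if (value > 0) {
        // Remember the current cross-fader position and center it.
        MaudioXponent.savedCrossfader = engine.getValue("[Master]", "crossfader");
        engine.setValue("[Master]", "crossfader", 0);
    } else {
        // Restore the cross-fader on release.
        engine.setValue("[Master]", "crossfader", MaudioXponent.savedCrossfader);
    }
};

// Soft takeover on the cross-fader is a one-liner at init time:
engine.softTakeover("[Master]", "crossfader", true);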

Other notes: This mapping implements the “secret handshake” required to get the lights to work, so you don’t need to hold down anything when powering up the controller. If you are using M-Audio’s ASIO drivers under Windows, though, the lights still won’t work. I don’t know what’s up with that, but I explained it on the controller’s wiki page (http://www.mixxx.org/wiki/doku.php/m-audio_xponent).

If you have an Xponent, please try this out and give me your feedback.
Just drop these files into your user controller mapping folder (http://www.mixxx.org/wiki/doku.php/controller_mapping_file_locations#user_controller_mapping_folder), and Mixxx will display the mapping as M-Audio Xponent Alternate.
M-Audio-Xponent-Advanced-scripts.js (32 KB)
M-Audio Xponent (Advanced).midi.xml (121 KB)

I got the 30-second warning to work, but I thought I’d see what anyone else thinks of the approach. The calls to update the play position send a percentage value, so rather than doing the math on each call, which seems like it would drag things down unnecessarily, I decided to do the math just once whenever a track is loaded.

The guts look like this:

// Array to hold the track warning percentage (1-based... deal with it)
warnAt = [];

// React to track load
engine.connectControl("[Channel1]", "duration", "MaudioXponent.trackLoaded");
engine.connectControl("[Channel2]", "duration", "MaudioXponent.trackLoaded");

// Position listener (from original 2.0 mapping, I guess it doesn't tell you which deck, so it's copy/pasted twice)
MaudioXponent.playPositionMeter1 = function(value) {
    print("Deck 1 position = " + value + ", warnAt[1] = " + warnAt[1]);
    if (value >= warnAt[1]) {
        // Past the warning point: flash the progress meter
        MaudioXponent.flashdur1++;
        if (MaudioXponent.flashdur1 == MaudioXponent.flashprogress) {
            midi.sendShortMsg(0xB3, 0x14, 0x00); // blank the meter for half the cycle
        }
        if (MaudioXponent.flashdur1 >= MaudioXponent.flashprogress * 2) {
            midi.sendShortMsg(0xB3, 0x14, MaudioXponent.convert(value)); // relight it
            MaudioXponent.flashdur1 = 0;
        }
    } else {
        midi.sendShortMsg(0xB3, 0x14, MaudioXponent.convert(value));
    }
};

// Math to figure out what percentage is 30 seconds from the end
MaudioXponent.trackLoaded = function(duration, group) {
    var currentDeck = parseInt(group.substring(8)); // We get the string [Channel1] here rather than an integer value... oh well
    var duration = engine.getValue(group, "duration");
    warnAt[currentDeck] = (duration - 30) / parseFloat(duration); // parseFloat to force floating point math
};

The only things I’d change are:

  1. I’d rather the call from duration gave me an integer rather than a string ("[Channel1]") but that’s out of my control.
  2. I’d like to pull the 30-second number from the engine itself, but I couldn’t find its name in the wiki. I know you can configure this, so I’d like to automatically account for user preferences.
  3. I could get more precise by using the number of samples in the track, rather than the duration in seconds but that’s probably getting too nit-picky. It works.

The best solution would be to create a new read-only ControlObject in C++ that indicates when the track is nearing the end. Then it would be trivially easy to connect that to a JS callback that lights an LED without any fancy tricks in JavaScript. Considering that Mixxx already calculates that somewhere, it should not be difficult to toggle a ControlObject when the waveform starts blinking. Do you have any experience with C++? You can find an introduction to the ControlObject system on the wiki.

The only C++ I’ve done was years (like decades) ago, but it’s not something I’m afraid of. I work in the C# world, so the syntax is not a problem. It’s probably more a matter of setting up a workable build environment. I think I saw somewhere that Visual Studio (Community) has everything we need to build Mixxx, and I have that on my other machine already.

A simple boolean ControlObject to control whether or not to start flashing would simplify things conceptually, but I think the method doing the flashing would have to remain largely as it is. I’m assuming that the callback that’s connected to playPositionMeterX in the above code has a defined frequency that it gets called at. I’m certainly not getting a callback on each sample played. The original code implements a simple counter and toggles the lights on and off every X number of calls (I think it’s set to 8), so maybe it’s every 1/10th of a second or something like that. That part would have to stay, although I’m kind of tempted to roll this in with the reaction to beat_active and have the progress meters flash in time to the beat rather than at some arbitrary frequency… maybe if I’m bored next weekend.

I’ll look at building Mixxx from source again at some point, but I have enough other things going on at the moment to get my head into a larger codebase like this. Single controller mappings fit within my available mindwidth.

I know there are things I’d love to look at. For instance, many controller mappings will define parameter values up at the top to enable/disable features. It would be great if there were a way to expose these to Mixxx itself so that they could be represented in the preferences dialog rather than having to open up the .js file to change them. For instance, I made my sync buttons flash to the beat. That might be irritating to others though. The ability to change the value from the preferences dialog would allow individual users to make their own choice. Similarly, my backwards nudge buttons make more sense to me, but might throw others off. A preference dialog would alleviate that problem.

What I might do next is gather up and document some of the preferences at the top of the script file, maybe offering multiple choices for certain buttons. Maybe you’d like the big minus button to do a spinback or a latching “play backwards” rather than the momentary version I’ve chosen. Maybe you’d like the |<, >|, lock, + and - buttons to navigate the library. A couple of toggles at the top should make the mapping much easier to customize.
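
As a hint of what I mean, the toggles would just be a block of flags near the top of the script, something like this (the names are made up for illustration):

// User options (hypothetical names) - edit these to taste.
MaudioXponent.options = {
    syncButtonsFlashToBeat: true,        // flash the Sync buttons in time with the music
    reverseNudgeDirection: true,         // left nudge speeds the track up (my preference)
    bigMinusLatches: false,              // false = momentary reverse, true = latching toggle
    transportButtonsBrowseLibrary: false // repurpose |<, >|, lock, + and - for library navigation
};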

If you have time for one question, I’m a little confused by what I’ll call “scoping rules” around script-level variables. Most of the state variables in the original script start with MaudioXponent. For instance “MaudioXponent.flashdur1”, which is defined near the top of the script. Nothing ever changes this value, so it’s basically a constant. I tried to define my warnAt array this way, but it didn’t work right. Within the trackLoaded function, I could change the value and immediately read it back out to verify that it was set, but in the playPositionMeter1 function the value would always appear to be zero, as if it were a completely separate variable. If I removed the “MaudioXponent.” from the beginning so that the variable is named simply “warnAt”, then the value set in one function is visible from the other. I’d like my variable to be defined inside the MaudioXponent object so that there can be no collisions with other loaded scripts, but it just doesn’t seem to work for me when I try it. JavaScript is not my first language, so I’m sure it’s something simple, but it’s not working the way I would think it should. What am I missing here?

We use the Visual Studio C++ compiler for the Mixxx build server and AppVeyor continuous integration, but I do not know if anyone uses the Visual Studio IDE to work on Mixxx. There are pages on the wiki with tips to set up Eclipse and KDevelop to work on Mixxx. Personally I use KDevelop. It recently got ported to Windows, but I have not tried it on Windows.

As for setting up a build environment, a lot of work has been done lately to make it easier to set up on Windows. I have not done it myself, so I’m not sure how easy it is, but there are a few developers who have working Windows build environments now.

What about a ControlObject that toggles between 0 and 1 in sync with the flashing of the waveform on screen? This would be similar to the cue_indicator and play_indicator COs; no logic is needed in controller mappings, just toggle the LED when the value of the CO changes.

Totally understandable. Mixxx is a lot of code and it takes time to learn your way around. There is some documentation to help new developers get started, but it is far from complete.

Yeah, this is definitely something Mixxx should have. Owen started a design document and proof of concept for this, but I think it needs more discussion and planning to come up with a comprehensive solution. Hopefully someone will take it up for Mixxx 2.2.

JavaScript has some weird scoping rules. Variable declarations are hoisted to the top of their scope. Scope is confined to functions in JavaScript (at least in the ancient JS interpreter Mixxx still uses; modern JavaScript uses let instead of var for block scoping).
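
For example, this contrived sketch (not from any mapping) shows both behaviors:

function example() {
    // Logs "undefined" rather than throwing: the var declaration below is
    // hoisted to the top of the function, but the assignment is not.
    print(warnAt);
    if (true) {
        var warnAt = []; // var is function-scoped, not block-scoped
    }
    print(warnAt);       // logs the array - the same variable as the first print
}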

It can be misleading to think of that as just a variable; it is a property of the MaudioXponent object.

I do not know what went wrong without seeing the code you tried. It’s possible there was just a silly typo messing something up.

Indeed, each script loaded runs in one execution context for that controller. There are some plans to improve this situation and make it work more like other JavaScript environments.

Yeah, that’s how I was thinking of them, as properties on the MaudioXponent object. They just didn’t seem to work that way when I tried it. I can try again, in case it was a stupid typo. It wouldn’t be the first time.

I figured that was the purpose behind the top-level object being named for the controller rather than just being called “controller”. I’ve never tried running with multiple controllers attached, but I know that Mixxx supports that.

I did some refactoring last night, and in the end I got everything working, but I have a question about what’s on the horizon for Mixxx.

I went ahead and made the progress bars flash when they reach 30 seconds from the end (the 30 is still hard-coded, unfortunately), but the momentary flash on each beat was not attractive, so I made a “beatState” array that toggles on each beat_active where the value is 1. Instead of “flash flash flash flash” for four beats, you now get “on off on off”. It looks really good, but you can pause with the bar on or off. I’m going to add another condition so that when the deck is stopped, you always see the bar. I tried tying this same behavior to the pulsing Sync buttons I implemented earlier and that’s where I noticed a problem.
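
The toggling bit looks roughly like this (a simplified, deck-1-only sketch rather than the exact code from my script):

MaudioXponent.beatState = []; // per-deck on/off state, indexed by deck number

MaudioXponent.onBeatActive1 = function(value) {
    if (value === 1) {
        // Flip the state once per beat: on, off, on, off...
        MaudioXponent.beatState[1] = !MaudioXponent.beatState[1];
    }
};

engine.connectControl("[Channel1]", "beat_active", "MaudioXponent.onBeatActive1");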

So far, Mixxx has no concept of WHICH beat it is (1 2 3 4), so it was very possible to have songs on both decks that are perfectly in sync, but the Sync buttons were pulsing 180 degrees out of phase. It was distracting enough that I put the Sync buttons back to simple flashes per beat. I don’t think I’ll ever be in a situation where I have two tracks playing, both within 30 seconds of the end, so the progress bars flashing out of phase shouldn’t really happen in the real world. What I’d like to know is whether there are any plans for more advanced beatgrid intelligence in the future. Will the beatgrid know which beat is #1 anytime soon? When that happens, I’ll modify my script so that the bars turn on with the odd beats (1 & 3) and off with the even ones (2 & 4), and maybe revisit on/off Sync buttons.

Cool. Please update the files in the first post.

No one is planning on implementing phase detection any time soon, unless you want to make that happen. There is an old Launchpad ticket for this where there is a brief outline of how much work would be involved and some hints on what tools could be used.

I’ve updated the mapping files in the original post with an evening’s worth of refactoring.

I replaced some mappings where the target function was left to do a lot of string parsing or math to determine the right channel or control with function mappings that provide the correct answer to begin with. It seems to me like that ought to make things run more efficiently. One example is the hotcue LED callbacks. The arguments to that callback are not in the normal order, so “control” ended up holding the complete group name, etc. The function now takes just three parameters with names that match their purpose, so the mapping looks like this:

engine.connectControl("[Channel1]", "hotcue_1_enabled", function(value, group) { MaudioXponent.onHotCue(0, 0, value); });

Where the first zero is the channel, and the second one is the hotcue number. The function looks like this:

MaudioXponent.onHotCue = function(channel, cue, value) {
    midi.sendShortMsg(MaudioXponent.on + channel, MaudioXponent.leds.cue1 + cue, value);
};

If doing it this way is somehow LESS efficient, then I’d like to know. For something like the hotcue states toggling, which doesn’t happen many times a second, I wouldn’t expect there to be an appreciable difference anyway, but there are other places where I’ve inlined a function call in order to better sort out and name the parameters, or to spare the function from having to parse strings to get the values it needs.

Also, I’ve renamed the scripts to “Advanced” instead of “Alternate”.

Feedback is appreciated.

Unnecessary string manipulation is not good, but I don’t think it makes a practical difference for controller mappings. Perhaps it might for sensitive controls like jog wheels, but I have no data to back that up. One of the benefits of Components is removing the need for most of the string manipulation that mappings have regularly done before because the strings are calculated infrequently then stored as properties of the Component object.

I guess I’m just looking for reassurance that there’s not some known issue with putting inline functions in engine.connectControl calls. It seems like the ideal place to do some “pre-work”. The basic mapping to a real function sometimes doesn’t give you all the information you need, or gives it to you in a format that’s not immediately useful. Within the connectControl call in the above example, for instance, I already KNOW that we’re talking about Channel 0 (Deck 1), and HotCue 0 (really 1… zero-based), so why not just hardwire that into the call to onHotCue rather than having onHotCue have to figure that all out again.

The original HotCue function was a five-branch switch statement based on string comparisons. There’s just no way that was efficient. In my experience, magic strings are almost always a bad idea. The mapping also contains a lot of… I don’t know what to call them actually. Collections of string/value pairs? It’s what I would use a proper Enum for in a “bigger” language, but using string keys to extract values from an ad-hoc object seems wrong to me.

Rather than

MaudioXponent.leds = { "cue": 0x01, "play": 0x02 ... }

I’d rather see

MaudioXponent.leds = { Cue: 0x01, Play: 0x02, ... }

As I’ve said, JavaScript is not my first language, so there may be some standard or idiom here that I’m unaware of, but why would you use strings as keys here?

We’re going pretty deep down the JS rabbit hole here for what I think is premature optimization. If you really want to optimize that, I think you’d have to use an IIFE that returns a function that calls the callback function with the precalculated arguments:

var hotcueNum = 1; // calculate the number however you need here
engine.connectControl("[Channel1]", "hotcue_" + hotcueNum + "_enabled", (function() {
    var precalculatedArg1 = "hotcue_" + hotcueNum + "_set";
    var precalculatedArg2 = "string" + 2;
    return function(value, group) {
        MaudioXponent.onHotCue(precalculatedArg1, precalculatedArg2, value);
    };
})() );

If any of those precalculatedArgs would be “hotcue_1_enabled”, you could just reference the third argument to the callback function within the callback.

That example code is pretty convoluted for something that is commonplace in a mapping script. I think the right way to handle this is how Components does it, by storing information shared between MIDI input and output callbacks as properties of an object that both the input and output functions are also properties of.
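
Roughly, the pattern looks like this (not the actual Components API, just a sketch of the idea with made-up MIDI numbers):

// One object per physical control; the strings are computed once, up front.
var hotcue1Button = {
    group: "[Channel1]",
    inKey: "hotcue_1_activate",
    outKey: "hotcue_1_enabled",
    ledNote: 0x08, // made-up note number for illustration
    input: function(channel, control, value, status, group) {
        engine.setValue(this.group, this.inKey, value > 0 ? 1 : 0);
    },
    output: function(value, group, control) {
        midi.sendShortMsg(0x90, this.ledNote, value ? 0x7F : 0x00);
    }
};
engine.connectControl(hotcue1Button.group, hotcue1Button.outKey, function(value, group, control) {
    hotcue1Button.output(value, group, control);
});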

I don’t think that whether you reference an object property name with or without quotes is relevant to the performance. An object property name is an object property name. If you want to understand JS better, I recommend a Mozilla Developer Network guide, A re-introduction to JavaScript.

I talked with some front-end devs who live in JavaScript all the time, and I guess the two are completely equivalent. The only real differences between quoted and unquoted “properties” are

  1. You can use reserved words if you quote them.
  2. Some JSON serializers want the quotes there (probably not relevant to Mixxx)
  3. Some IDEs prefer no quotes, and will offer better intellisense if you leave them off.

So basically, it’s a wash.
I’m not really trying to optimize unnecessarily, but you can see several people’s hands on this script, and I prefer a measure of consistency. Some of the objects use quoted identifiers and others don’t. I was trying to understand the difference. It seems it’s just a personal style choice in this case. I am trying to simplify and eliminate code duplication, though. Just for my own sanity, and to reduce the size of the mapping to fit in my head better. There are a few dead ends and unused functions in the original mapping, so I’m pruning those as well, especially where they overlap something I’ve added, like flashing progress bars at -30sec instead of a hard-coded 75%. No need to keep the old supporting functions and variables around.

I think I’m just about ready to submit a PR for this one. I’m thinking since it’s aimed at 2.0, I should take the branch from 1.12, although I’m not sure it will make it into the final 2.1 if I do that. I would imagine you’re pulling 1.12 into 2.1 before it’s finalized, right? Then, I figure I’ll make a new branch off of 2.1 and start work on the newer version with proper on-screen focus, and hopefully leveraging the Components library.

I have a process question, though. The wiki says to fully document the controller on the wiki before submitting the PR. Since this is an alternate mapping, I was thinking I’d create and link off to a new page just for that one mapping rather than cramming it all onto the main controller page, especially since it seems there will be three mappings now (Stock, Mixco, Mine). That is unless the Mixco mapping is completely replacing the existing stock mapping. If that’s acceptable, then I’ll put some sort of notice up at the top that this information is preliminary so people don’t wonder why they don’t see this supposed new mapping in their systems. Then, when the next version comes out, I would remove that notice. Does this sound about right?

The 1.12 branch has not been actively developed. We were initially thinking of doing a 2.0.1 release shortly after 2.0, but the old build server died shortly after 2.0 was released so that did not happen. Development has been done on the master branch. I suggest posting the 2.0 compatible mapping here, then branching off of that for 2.1 to make the pull request.

For the documentation, make a new section on the existing wiki page rather than a new page.

Regarding my earlier question about inlining certain function calls in order to reduce work later on… I now have a reason NOT to do that. One word… closures.

As I’ve been re-working this script, I’ve been trying to eliminate repetition and hard-coded values. I’ve implemented a deck array to store various state variables and key values, such as the values for pressed and released (e.g. deck[1].on = 0x90 while deck[2].on = 0x91). This is part of the groundwork for adding 4-deck control to the Xponent. It already has a mode A/B switch on the front that modifies the values of all the controls, so it’s ideal. Where some connectControl calls appeared only twice before, they would now appear four times. For items like the hotcues (there are 5), what used to be 10 lines now becomes 20. Throw in multiple loops, and this started to get out of control, so I thought I’d just make a for loop from 1 to the number of decks, and repeat the same initialization for each line… and that’s when closures stepped in to ruin my day. Without a simple closure-busting trick, I can’t dynamically fill in values based on the loop variable at initialization time because they’ll all pass 5 (the ending state of the loop variable) at runtime, and there IS no deck 5. Anyway, lesson learned. Wiring up the controls with just the name of the function and doing the work there is perfectly acceptable, and means I can write my mapping to work with any arbitrary number of decks.
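
For anyone who hits the same thing, the closure-busting trick is to give each iteration its own scope with an immediately-invoked function; a quick sketch using hotcue 1 for illustration:

// Broken: all four callbacks see deck === 5 (the loop's final value) when they run.
for (var deck = 1; deck <= 4; deck++) {
    engine.connectControl("[Channel" + deck + "]", "hotcue_1_enabled", function(value) {
        MaudioXponent.onHotCue(deck - 1, 0, value);
    });
}

// Working: the IIFE captures the current value of deck as d for each callback.
for (var deck = 1; deck <= 4; deck++) {
    (function(d) {
        engine.connectControl("[Channel" + d + "]", "hotcue_1_enabled", function(value) {
            MaudioXponent.onHotCue(d - 1, 0, value);
        });
    })(deck);
}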

One thing I noticed that didn’t work as I expected was the num_decks value. I expected it to toggle between two and four depending on the state of the UI, but it keeps saying “4” no matter whether I’m showing four decks or not on-screen. Is this normal? I suppose in the end it doesn’t really affect much. If I initialize decks three and four, and you never use them, then it doesn’t hurt anything.

I think [Master], num_decks only indicates the state of the engine. [Master], show_4decks indicates the state of the UI. But controller mappings should be able to manipulate decks 3 & 4 even if they are not showing on screen.

One of my goals with Components is to make toggling between decks easy to implement. However, the entire script needs to be written using the library for that to work.

I intend to base a new version for 2.1 on the Components framework, but it’s not available in 2.0, which is what I’m using now. I’m also hesitant to experiment with 2.1 on the system I actually use for gigs. It’s delicate enough as it is. I learned years ago to never let Windows update the graphics driver or I would sometimes crash in the middle of a gig. If I left it on the original OEM Nvidia drivers, it would stay up forever. Well, now Windows 10 likes to update drivers without asking, and I have had it crash on me once or twice. Never during a gig, but enough that I don’t want to mess with this computer unnecessarily. I’ve managed to roll back the drivers and tell Windows to leave it alone, but I still want to keep my fiddling to a minimum.

For some reason though, I can’t get Mixxx to talk to either of my controllers when I’m running it on my current developer machine, a Surface Pro 4. I don’t know why that is, but it just doesn’t want to talk to my controllers. It has waaaaaay too many things listed as possible controllers, especially when it’s hooked up to the dock, and the left pane won’t scale, so I have to guess from the first part of the name what the items in the list are, but I’m able to identify the Xponent in the list. Then I pick my script and… nothing happens. No celebratory light flourish, no controls, nothing. I’ll try to figure out what’s up with that next, then maybe I’ll try installing the early releases of 2.1 and see what I can do with the mapping.

Those devices you see listed as controllers are probably HID devices. I don’t know why your Xponent is not working with the Surface Pro 4, but I’d suggest looking at the log.

Yes… tons of HID devices. More than you would expect.