Embedding metadata into digital music material that AI can then use to EQ/shape the final sound so that it aligns with the artist's (or artists') preferences. Must be equipment, environment and listener agnostic.
Hmm... Seems like a very tall order to me. Without getting the artist in question around to my house, how would I know that they 'like' the final result? In the absence of said artist, how could I be sure that what is coming out of my system in my room is 'correct'?
In addition, being an old fart, I find my hearing is no longer what it was (for example, alas, I can no longer hear John Bonham's squeaky bass-drum pedal in Since I've Been Loving You). So the EQ settings that sound good to me could well be a long way from what was intended or preferred by the artist, or indeed the mastering engineer.
However, embedding metadata that manages ownership, distribution rights, royalty payments, etc. into digital music (and indeed video) media seems like something that could happen quite easily. It even makes me wonder why it's not already there.
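For what it's worth, the carrier side of this is largely a solved problem: ID3v2 tags in MP3 files have had frames for copyright and file ownership for years, and user-defined frames could in principle carry something like an artist's EQ preference for a player's AI to interpret. Here's a rough sketch using the Python mutagen library; the ARTIST_EQ_CURVE tag name and its JSON payload are purely my own invention, while TCOP (copyright) and TOWN (file owner) are real ID3v2 frames:

```python
# A sketch, not a definitive implementation: embed a hypothetical
# artist EQ preference plus standard rights info into an MP3's ID3 tag.
import json
from mutagen.id3 import ID3, TXXX, TCOP, TOWN

tags = ID3("song.mp3")  # assumes song.mp3 exists and already has an ID3 header

# Hypothetical payload a playback system's AI could read and apply.
# The tag name and JSON shape here are invented for illustration only.
eq_curve = {"bands_hz": [60, 250, 1000, 4000, 12000],
            "gain_db": [1.5, 0.0, -0.5, 1.0, 2.0]}
tags.add(TXXX(encoding=3, desc="ARTIST_EQ_CURVE", text=json.dumps(eq_curve)))

# Ownership / rights metadata: these frames are part of the ID3v2 spec.
tags.add(TCOP(encoding=3, text="2024 Example Records"))   # copyright message
tags.add(TOWN(encoding=3, text="Example Records Ltd"))    # file owner/licensee

tags.save()
```

So the hard part isn't carrying the data at all; it's the equipment-, environment- and listener-agnostic interpretation of it that the original idea calls for.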