A commentary on speakers and measurements

  • A commentary on speakers and measurements

    Greetings people of the Internet and good afternoon. Select measured loudspeaker responses have become something of a casual, standardized yardstick in places, and assumptions about them are being transferred onto real sound. I've had some thoughts about speaker response measurements versus sound rattling around for a while and thought I'd post them.

    Just tossing a few graphs out there (like we did recently) may imply that they actually do translate, more or less, into our aural impressions of the loudspeakers that made them. I've never found the measurement / speaker connection all that evident and automatic, however.

    The challenges tied up in measurements are real. They're not insignificant.

    There's too much to speaker behavior for us to leap from data to sound. We can develop a reasonable, qualifying view of speaker data, just not as a rigorous interpretation of sound.

    Using data carries with it two seemingly contradictory truths: The first is that there can never be enough. More quality data simply means more reference points for a more nuanced view. The second is that the more of it there is the more variability creeps into the casual assumptions sometimes freighted with it.

    Yet depending on preference, data is anything from a one-stop bible on speaker performance to an irrelevant accessory barely related to listening at all. In my opinion it's neither, but there are complexities to data - some practical, some theoretical, and some logical – to explore if we expect relevance.

    Data type, condition, and acquisition are neither standardized nor static. The context I'm addressing is the retail, casual, street use of data and not the bleeding edge in a closed laboratory where context and meaning are vastly different. There is a gulf between the two just as there is a logical contradiction between the complexity of data and drawing a simple conclusion from it. In my experience it's very reluctant to be that convenient.

    I'll try to break the questions out into specific areas: the data's technology, its collection, assumptions about its use, and the fallacies that arise around it. In no specific order, here we go.

    UPDATE July 29, 2020

    Since these posts were written three years ago aspects of the measurement-centric mindset have naturally evolved. A look at the conventional wisdom in places where speaker theory is discussed shows similar reliance on expanding data sets. Many times these circles have come to see what they call "the measurements" as a comprehensive, even complete snapshot of sound.

    But are measured data complete? In some cases "the measurements" refer to just one amplitude response. In others they refer to a cluster of amplitude responses, which is more useful. But just as there's no way to summarize all available data into a predictive tool on real sound, there's still a general tendency to pare the data down to just the amplitude response(s).

    Are they universal and do they speak to every behavior of a fairly complex loudspeaker? They are not complete and as abstracts, they cannot condense reproduced sound down to a handful of graphical records of some speaker behaviors.

    What are the other characteristics of complex loudspeaker behavior, in rough order of importance, not reflected intuitively in available data if they appear there at all?

    Acoustical size. The importance of acoustical size cannot be overstated. Put another way, if data does not first call out the difference between, for example, a large multiway floor speaker's amplitude response and a single 4" driver equalized to copy that response, then the data is virtually meaningless. This applies across all speaker size classes.
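
    A rough, hedged illustration of why acoustical size matters so much, using the textbook baffled-piston relation rather than any particular product's figures (a back-of-envelope Python sketch; the driver areas and excursions below are assumed, generic values):

    import math

    def max_spl_1m(sd_m2, xmax_m, freq_hz, rho0=1.18):
        """Rough displacement-limited SPL at 1 m, on-axis, half-space.
        Far-field baffled piston: p_peak = rho0 * w^2 * Sd * xmax / (2*pi*r)."""
        w = 2 * math.pi * freq_hz
        p_rms = rho0 * w**2 * sd_m2 * xmax_m / (2 * math.pi * 1.0) / math.sqrt(2)
        return 20 * math.log10(p_rms / 20e-6)

    # Assumed, generic driver figures, not any specific model:
    print(max_spl_1m(0.0050, 0.003, 40))   # ~4" driver, 3 mm excursion: roughly 76 dB at 40 Hz
    print(max_spl_1m(0.0530, 0.005, 40))   # ~12" woofer, 5 mm excursion: roughly 101 dB at 40 Hz

    Two devices can be equalized to the same plotted shape and still be separated by roughly 25 dB of clean low-frequency capability, which is the kind of difference the graph alone never shows.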

    Damping, transient response, stored energy, self-noise, etc. Cumulative Spectral Decay (CSD) is one way to display "hidden" resonant stored energy as a function of time. Just as a bell rings over time, any undamped speaker behaviors, whether mechanical, acoustical, or electrical may also store energy and release it over time.
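
    As a hedged sketch of how a CSD-style record can be derived (the windowing details vary by analyzer; this toy Python version just shaves time off the front of a measured impulse response and re-transforms what remains):

    import numpy as np

    def csd_slices(impulse, fs, n_slices=30, step_ms=0.1, fft_len=4096):
        """Toy cumulative spectral decay: drop step_ms more of the impulse
        response's leading edge for each slice, then FFT the remainder."""
        step = int(fs * step_ms / 1000)
        rows = []
        for i in range(n_slices):
            spec = np.fft.rfft(impulse[i * step:], n=fft_len)
            rows.append(20 * np.log10(np.abs(spec) + 1e-12))
        freqs = np.fft.rfftfreq(fft_len, 1 / fs)
        return freqs, np.array(rows)   # each row is one later, quieter decay slice in dB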

    Fundamental transfer function, damping, and power behaviors. Multiway speaker crossovers split the audio band into segments and route it to specialized drivers in the speaker. How these functions affect an enormous range of audible behaviors, while obliquely visible in some data, is almost completely unknown to the consumer and casual speaker data fan.

    Distortion, type, and distribution, including by loudness level. No speaker reproduces a signal without distorting to some degree. The amount, type, and location of this distortion are naturally important considerations.

    Harmonic distribution. Harmonics generated by distortion appear in relationship to a fundamental tone. How they're distributed, and in what relationship to the fundamental, can be a component of perceived reproduced sound.
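
    A minimal, hedged sketch of how a harmonic distribution might be read out of a captured sine tone (FFT the capture and compare the levels at integer multiples of the fundamental to the fundamental itself; the function and its parameters are illustrative, not a standard method):

    import numpy as np

    def harmonic_levels(capture, fs, f0, n_harmonics=5):
        """Level of each harmonic relative to the fundamental, in dB."""
        spec = np.abs(np.fft.rfft(capture * np.hanning(len(capture))))
        freqs = np.fft.rfftfreq(len(capture), 1 / fs)
        level = lambda f: spec[np.argmin(np.abs(freqs - f))]
        return {n: 20 * np.log10(level(n * f0) / level(f0)) for n in range(2, n_harmonics + 1)}

    Whether the energy lands mostly in the 2nd and 3rd harmonics or is spread into the higher ones is part of what the single FR line never conveys.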

    All pass, minimum phase, linear phase, and transient relationships; time offset, group delay, step response, etc. The complexities of the transfers mentioned above are compounded almost infinitely across all speaker design types and examples. How they work is virtually never expanded upon, but how they all inherently affect both sound and data is fundamental.

    Bandwidth. Arguably as important as acoustical size, the bandwidth of the speaker is a strong precursor of listener reaction. However without acoustical size being established, it becomes a fairly ignored aspect of real sound.

    There are many other elements to loudspeaker-reproduced sound.

    Virtually all discussions of speaker data fail to include all data. At the same time, no discussion of loudspeaker data can predict the sound of a loudspeaker, which is to say that data is very arguably a design check and not a post-design predictor of sound, at least not to the degree that it will speak to the reaction of a reasonably perceptive listener as he or she is, or is not, gratified by an original musical performance reproduced by a loudspeaker.

    References to "the measurements" as if there were a global, comprehensive, and complete data snapshot to predict sound are simple biases.

    Over the time since the first comment above, some speakers regarded for their measured linearity on one measurement system have been shown to be much less attractive in the data from another. This brings into play a comment posted below about deviations in the data from a single speaker. Even the objective data is apparently not absolute. In other cases an over-reliance on very limited amplitude data has created a small class of listeners who have conditioned themselves to hear virtually nothing else in the complex reproduced sound.

    The point of speaker data is to isolate a speaker's behavior and scale its relative degree. It seems much, much less able to thoroughly capture the actual sound of a whole, complex device. It's perfectly acceptable to deviate from one or more classical data expectations in order to fulfill aspects of real sound elsewhere - Chane has done this in places, both consciously and to pursue a more truthful sound.

    Without a thorough facsimile of total sound visible in the data, the data fails to display enough of a comprehensive, even total picture of that sound. The data is a profound and essential component of the loudspeaker design looking outward. It is however nearly as profoundly incapable of proving sound unheard as a predictive tool looking inward on the speaker and its measurable, complex personality from the outside. There is a fundamental difference between primary design data and an after-the-fact attempt to show sound through data.

    What's the solution? As always, it's hearing the speaker, in the space in which it'll be used, over a long enough period of time to allow it to inform the subtleties of perception, and to not bias the experience - which is what this is ultimately all about - with limited preconceptions of what the sound should be according to a limited interpretation of data.

  • #2
    Continued...

    Measurement systems. Any endorsement of speaker data must naturally involve knowing that the data is truly representative, and believing that it is representative - that it freely translates back and forth into hearing - should naturally involve knowing what type it is and how it was gathered.

    For data to be as casually handy as we may imagine, that data should legibly reflect a lot of complexity from within a standardized framework. Both should be understood and accepted as aurally relevant. For that to occur we have to use the same language everywhere, we have to tie the measured effect to a perceptible cause, and we have to connect the data method to its purpose and utility.

    In reality all of this is open to interpretation. Data is multifaceted, it's somewhat abstract, it's gathered many different ways, it's processed differently, and naturally it has different aims. Some systems operate on a speaker globally, some at the component level, some are acoustical, and some are mechanical or electrical. No one measured response encompasses enough information to reflect a complex system like a speaker.

    Since data systems and acquisition methods can't be universal, their data isn't interchangeable either. Data systems measure with burst noise or steady sine signal, gate the response or include environments, and typically measure and record just one of many behaviors. Arbitrary data processing then alters and tunes the raw output to make it legible. It has to be interpreted before it can be translated and it still doesn't necessarily translate into a speaker's perceptible, apparent sound.
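
    One common acquisition choice, sketched roughly here in Python as an assumption rather than a universal method: window ("gate") the measured impulse response before the first room reflection arrives and transform only that portion, trading away low-frequency resolution for a quasi-anechoic result. The gate length itself is one of those arbitrary processing decisions.

    import numpy as np

    def gated_response(impulse, fs, gate_ms=5.0, fft_len=8192):
        """Quasi-anechoic FR: keep the first gate_ms of the impulse response
        (before the assumed first reflection) and FFT it. Resolution is only
        about 1/gate, e.g. a 5 ms gate resolves nothing much below ~200 Hz."""
        n = int(fs * gate_ms / 1000)
        gated = impulse[:n] * np.hanning(2 * n)[n:]   # half-window fades the cut edge
        spec = np.fft.rfft(gated, n=fft_len)
        return np.fft.rfftfreq(fft_len, 1 / fs), 20 * np.log10(np.abs(spec) + 1e-12)

    Change the gate, the window shape, or the smoothing applied afterward and the same speaker yields a visibly different plot.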

    Speaker data systems speak in different languages and even with different dialects. Without a unified and inherently very complex standard picture, data does not translate well; not into sound and not even into another simple FR function and graphic. Except within a single lab, data stands pretty much alone.

    The narrow utility of the loudness-frequency curve. The most common logged output is the simple one-axis amplitude magnitude or frequency response (FR). It's this information that data advocates probably most often assume reflects speaker performance or quality.

    But a speaker's FR merely represents its frequency-dependent loudness at a point in space or an average across space for a given input level. It contains no evident information about a host of other speaker behaviors. Even if the plotted FR were the perfect representation of all speaker behavior (it cannot be), there is no standard for how even this one data set is acquired, post-processed, and charted. There are advisories and attempts to standardize, but it's still a variably-presented slice of a relatively narrow speaker behavior.

    Among other data types are average acoustical power in the speaker's radiating space – involving the speaker's off-axis FR and its polar graphic - acoustical and electrical phase response, time-energy storage and the waterfall plot, impulse and step functions related to the speaker's transient timing, non-linear behavior and dynamic stability spanning a range of loudnesses, distortion and harmonics, impedance magnitude and amplifier interaction, mechanical and thermal systems linearity, and so forth. Most of these behaviors are also dynamic and some are also captured at arbitrary points. Only some are reasonably absolute enough to be thought of that way.

    The simple FR is only as useful as it is a complete snapshot of all speaker outputs and behaviors, and it is not that snapshot at all. Using FR to gauge a speaker may be less viable than isolating just horsepower and torque to gauge a racing car. Racing has stopwatches. Listening does not.



    • #3
      Continued...

      Environments. Compounding this fundamental limitation, data invariably includes some degree of the speaker's interaction and involvement with the surrounding environment. Acoustically-inert anechoic environments are rare, while the real-world use of speakers actually depends on the psychoacoustical dependence listeners have on real environments - hearing our audio system in anechoic conditions is not only impossible, it is highly unnatural sounding.

      The question we're left with is under what conditions the data shall be established, to what extent environment will factor in, to what linear or non-linear effect, in what domain, and how all this shall be processed so as to reflect and suit the average user's average environment, if any. It's at about this point in our mental experiment that we realize just how limited and limiting simple, abstract, and typically isolated FR is, at least as a gauge of real sound.

      Drivers and arrays. With our focus still on simple FR, we should know that any single driver will exhibit different behaviors across its operating range and at different angles to the microphone. Each individual driver in a multi-driver speaker has nearly as many measurable behaviors as the complex loudspeaker whole does.

      Now our question is whether we should weight all frequencies from a single loudspeaker driver identically. Given how driver outputs change so much over their ranges and in so many ways, how would we do this, or should we assume that a particular FR at one point in space is sufficient and representative of all output variables?

      Even if we could relate these non-linearities, they're compounded by how multiple drivers interact in multiple domains when profoundly modified by their electronic dividing filters. Consider a series of balls of sound, one from each driver, arranged one above the other along the speaker's baffle. Note that each also varies in shape as frequency changes. Even if they all blend uniformly, they'll still dissolve into one another relative to how those shapes change and where the microphone is relative to them as a group. Move the microphone, change the measured result.

      It's up to us to make an artful decision where to record this dynamic wall of sound pressure. What's more, at one meter these non-coincident SPLs won't triangulate like they do at our 12 or 15 or 20 foot listening depth. Even incorrectly assuming that they merge together evenly, where can we reasonably expect to sample their output, and how can we claim that a visual record of those points correlates well with what we hear when the evidence says it doesn't?
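
      To put rough numbers on that triangulation point, assume (purely for illustration) two driver acoustic centers 20 cm apart on the baffle, a microphone on the axis of one of them, and a 2 kHz crossover region. The extra path length to the second driver, and so the relative phase where both are contributing, changes substantially between a 1 m measuring distance and a 4 m seat:

      import math

      def extra_phase_deg(depth_m, offset_m=0.20, f_xo=2000.0, c=343.0):
          """Relative phase (degrees) at f_xo between two sources offset_m apart,
          with the mic on the axis of one source at the given depth."""
          path_diff = math.hypot(depth_m, offset_m) - depth_m
          return 360.0 * f_xo * path_diff / c

      print(extra_phase_deg(1.0))   # roughly 42 degrees of extra shift at 1 m
      print(extra_phase_deg(4.0))   # roughly 10 degrees at 4 m: a different blend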

      But drivers don't even dissolve evenly into one another either, which is one of the most challenging aspects of design. They merge together as a series of complex, shifted patterns, much more complicated than rings from pebbles in a pond, bent and twisted by the acoustical phase and geometry of their inter-driver transfer functions.

      Now where will we place a microphone, knowing that this field of patterns and cancellations is this variable and this inconsistent with axis? Can we average these behaviors, drawing on ten, twenty, or a hundred points? Will an average of them represent the single, intended listening axis that FR generally depicts?

      We're left with a difficult question: Can we reliably correlate the complex speaker with its measured FR snapshot and can we reliably connect its measured FR snapshot to its sound? With nothing more than the simple FR function and one speaker, we're already deep into subjectivity. We have to decide what to measure and we have to come to some informed conclusion that that measured data is representative.

      (An excellent example of this variability is the nearly universal practice of aiming the loudspeakers we're listening to in order to get the best sound from them. In one fell swoop we've overridden “objective” FR with a subjective need.)



      • #4
        Continued...

        Frequency linearity. At the lower frequency end of its range a driver diaphragm generally operates pistonically, dispersing sound broadly. At the top of its range it may operate more erratically and chaotically as its diaphragm acoustically breaks up, typically with a very narrowed field of output. This change in response manifests across virtually all of the individual drivers in a speaker. It's also not limited to just FR but exists in virtually all speaker behaviors.
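
        As a hedged illustration of that narrowing (an idealized rigid-piston model, not any real driver; the radius below is an assumed value), the textbook off-axis response is 2·J1(ka·sinθ)/(ka·sinθ):

        import numpy as np
        from scipy.special import j1

        def piston_offaxis_db(freq_hz, angle_deg, radius_m=0.065, c=343.0):
            """Off-axis level (dB relative to on-axis) of an ideal rigid piston in a baffle."""
            x = (2 * np.pi * freq_hz / c) * radius_m * np.sin(np.radians(angle_deg))
            return 0.0 if x == 0 else 20 * np.log10(abs(2 * j1(x) / x))

        print(piston_offaxis_db(500, 45))    # roughly -0.2 dB: nearly omnidirectional
        print(piston_offaxis_db(3000, 45))   # roughly -8 dB: already beaming near the top of its band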

        If we measure driver FR, will we weight all of these frequencies the same? And given that any driver or speaker will operate and therefore measure differently at different frequencies, how will the loudspeaker design equalize and/or normalize these respective frequencies?

        As noted, this variance involves distance too so at what depth will we standardize our normalized data capture (especially when the entire speaker is physically large and has many emitters)? Can we expect an amplitude at X frequency to contain all the same distortions as the exact same amplitude at frequency Y at every point in space? What if a driver within the speaker measures flawlessly at that frequency and at 1 meter but not at the intended listening distance? Will we alter its natural response to “correct” the initial response? What is its natural response when it changes with frequency and distance from the speaker?

        What about line radiators or partial line radiators? They engage a much different loudness/depth equation than regular drivers and speakers. How about plane radiators like large electrostatic panels? They deviate even more severely from the propagation that smaller, conventional point-source speakers exhibit, which becomes the foundation of our assumptions about FR.
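
        For a rough sense of that different loudness/depth equation, using the idealized textbook cases (real speakers land somewhere between them): an ideal point source loses 6 dB per doubling of distance, an ideal infinitely long line source only 3 dB.

        import math
        r1, r2 = 1.0, 4.0
        print(20 * math.log10(r2 / r1))   # ideal point source: ~12 dB quieter at 4 m than at 1 m
        print(10 * math.log10(r2 / r1))   # ideal infinite line source: only ~6 dB quieter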

        How can a speaker deviate so much from itself when I change these variables? Likewise, what about FR differences due to level change? Does a driver and speaker measure the same at 75dB as it does at 85dB? Will a small speaker that measures better than a large speaker also sound better than the larger speaker if the larger speaker has other benefits relative to its greater acoustical size?

        Crossovers and filters. Our natural assumption is that FR should be as close to a flat line as possible. However, transfer functions between multiple drivers introduce the spatial, geometric complexities of their summed radiation, and we should consider that not all crossover types produce a flat, one-axis response. (The A2.4 has one.) Perfectly usable inter-driver transfer functions exist - some specifically chosen for their real, audible benefits - that do not sum to a perfectly neutral FR.
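
        A minimal sketch of that point using two classic textbook filter pairs (an arbitrary 2 kHz crossover and ideal drivers are assumed, so this ignores everything the real drivers add): a 4th-order Linkwitz-Riley pair sums to a flat magnitude while rotating phase, whereas a 2nd-order Butterworth pair nulls at the crossover in one polarity and bumps about +3 dB with one driver inverted.

        import numpy as np
        from scipy.signal import butter, freqs

        wc = 2 * np.pi * 2000.0                                    # assumed 2 kHz crossover
        w = 2 * np.pi * np.logspace(np.log10(200), np.log10(20000), 500)

        def H(order, btype):
            b, a = butter(order, wc, btype=btype, analog=True)
            return freqs(b, a, worN=w)[1]

        dB = lambda h: 20 * np.log10(np.abs(h) + 1e-12)
        lr4 = H(2, 'low')**2 + H(2, 'high')**2     # LR4 = squared 2nd-order Butterworth sections
        bw2_same = H(2, 'low') + H(2, 'high')      # 2nd-order Butterworth, both drivers in polarity
        bw2_flip = H(2, 'low') - H(2, 'high')      # same filters, one driver inverted

        print(np.ptp(dB(lr4)))      # ~0 dB: magnitude-flat sum (the phase still rotates)
        print(dB(bw2_same).min())   # deep notch near the crossover
        print(dB(bw2_flip).max())   # ~+3 dB bump at the crossover

        Each pair is a workable topology on paper; only some of them draw a flat line, and none of the plots says which blend sounds right.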

        To know what to look for visually we'd first need to know what the designer was doing acoustically.

        There is typically more than one way to reach a functional goal and each one has pros and cons. Over-engineering a filter network for flattest response could be detrimental to good sound and the better-looking response isn't guaranteed to be the better-sounding function.

        There are a hundred ways to set up multiple drivers in a system aimed at a simple, static, summed response – will they all sound the same, and if not, how do we assess their real, audible differences from just the plotted response? I can run thousands of summed response iterations of a single crossover type. Can I say from looking at all this acoustical modeling which one, if any, sounds right?

        Flat, on-axis FR is a fair design goal but it is still just one design goal and not a universal goal for all types and styles, and with them, all sounds. As completely trivial as it is to acquire a neutral on-axis FR to within a decibel or less, it's unwise to assume that all FR are intended to be flat, or that even among a dozen flat responses all sound identical. I've never once found that to be true.



        • #5
          Continued...

          Sighted bias. So far we've dealt with some of the complexities and issues of simple FR dependency. However, if our goal is a metric for real speaker sound quality, we're also faced with any preconceptions we may have associated with speaker data without real basis. Perhaps foremost is how, if our goal is genuine objectivity about a speaker's sound, we avoid overemphasizing FR such that it consciously or unconsciously overrides our sense of hearing.

          Put another way, if we conclude we cannot trust our hearing concerning a device designed to be heard, how will we avoid biasing ourselves with visual data? Could we trust that X type of FR or other measured speaker data sounds like X when in reality it can only sound like itself? Could the sound of a flat measured FR be the sound of a specific speaker and not the sound of real neutral sonic output?

          This potential for bias could be a problem for those of us with objective sensibilities, one compounded by the next one on our list.

          Pragmatism and switching domains. Having identified the problem of sighted bias, specifically how would we weight visual input - the plotted FR - alongside real aural input with the goal ostensibly being to serve just that aural perception? The graphical output hinges on very specific, time-independent, sophisticated, and very limited data types (see above), while the aural human experience includes all speaker behaviors, outputs, and characteristics at once, presented to a profoundly acute stereophonic auditory system, albeit one without the plotting function to chart and store these behaviors and characteristics.

          Which of these will we value, or in what ratio and to what extent shall we value them, all the while remembering that we will be choosing, consciously or subconsciously, to serve hearing or to serve sight; personal perception or academic, purported objectivity; the real or the interpreted?

          Yes, we certainly may elect to make speaker design and output an academic pursuit - that's not my aim, but I recognize that it is still an aim - and I respect those who have done just that.

          The challenge with data isn't just type, method, complexity, condition, and variability. The challenge also involves the philosophical issue of the beliefs that inform our choices in an associative framework. Speaker data is a relative decision as much as it is assumed to reflect or be an objective measure.



          • #6
            Continued...

            The order-of-importance misconception. FR has gained primacy. For whatever reason, it's pushed other data aside to become something it wasn't intended to be, which is a snapshot of speaker quality. With this problem comes another related to it, which is over-emphasizing FR at other points in the listening sphere.

            In other words, the uniformity of the speaker's off-axis FR may become nearly as valued as what's hoped to be its flat on-axis FR. If neutral FR is king, so this logic goes, then the average FR elsewhere in the room - the speaker's average acoustical power - must be queen.

            The problems with this are just as evident as with simple axial FR. All it takes to challenge this belief is a mental experiment involving a real single-point speaker, a design that crops up from time to time in attempts to perfect coincident, coaxial acoustical radiation and sound pressure from more than one driver type approximately sharing the same physical origin. These types generally raise other questions, while not solving the remaining issues that plague non-coincident speakers except that they have some semblance of doing what they do from an approximate single point.

            Does the single-point speaker sound better? Is it more "accurate"? That remains unanswered. Overemphasizing soundfield linearity without full regard for intrinsic sound quality cannot logically prove good sound. The two are different; they are not the same thing. Remembering the vagaries of summing multiple, non-coincident drivers at an arbitrary, chosen point in space that we explored above, it's fair to ask if consistent, constant acoustical power is automatically synonymous with very high quality, low distortion sound. I'm not challenging single-point speakers; I'm highlighting that sound field is not the same as sound quality.

            Interestingly, proponents of coaxial radiation may overlap proponents of a third high-ranking claim on good sound, which is the acoustically-treated environment. While there's a solid, confirmed argument for controlled acoustic re-radiation in the listening space, primarily seeking uniform, coincident-source or constant acoustical speaker power in a treated space could seem contradictory, at least without significant qualifiers.

            As many very experienced audiophiles have found, excellent sound from extreme speaker systems in minimally treated spaces is not uncommon. Their argument is that the speaker's first-arriving waveform contains the most important cues for the ear, a view with considerable appeal. Here we see that the argument is actually for low distortion sound and not just consistent, uniform distribution or external management of any sound.

            Nevertheless, research appears to show that smooth acoustical power is beneficial and I'm not questioning it as a principle. Since the 80's I've always sought as consistent and usable off-axis acoustical power as possible, just not to the exclusion of other design feature-benefits. I'm questioning if it's a principle that should displace others, and if it's wise to simply assume that evidence of it is synonymous with sound or quality. All other things being equal it's worth having. But all other things usually aren't equal. Balance is key.

            The difference between design and sound per FR. Chane invariably designs for a flat-line FR when the design's fundamentals allow for it. Just as invariably we knowingly deviate from it in the final tune. The most visible deviations in the plotted data from the models depicted in this thread are there because they sound right(er) that way, not because the relatively trivial task of rendering them table-top flat sounds better. If a design is specifically aimed at attractive goals that do not produce the simple, flat-line FR characteristic, excessive fealty to FR may compromise sound quality.

            Dead-flat, FR-forward design exists in a different domain than musically-authentic sound. I emphasize that they may or may not share the same mental space – to make this point it doesn't matter - but we can only logically conclude that one of them sounds flat and the other looks flat. Linking the two is our challenge.

            Flat response is not necessarily perceptibly authentic sound. Design is not sound and vice versa. Even if they were to overlap, logically we cannot conflate them. They're different.

            The missing data fallacy. With these points in mind, the argument that data directly or tacitly validates the design or the designer in my opinion eventually falls flat. There seem to be good reasons not to substitute data as a significant, published criterion for evaluating a loudspeaker, whether for its sound or its putative technical prowess or lack thereof.

            What if X is specifically designed to sound authentic and does sound more authentic than Y, while Y has the seemingly more attractive measured response? There are times when our assumptions about data could mislead us. If we use data exclusively we should realize that we're doing so, and that we've potentially demoted the ear. This does not rule data out (I've used data since the 80's) but it challenges it as the assurance of good sound it may have become. It may even be a cornerstone of good sound but it is not good sound itself.

            Not automatically publishing data is not a failure; it is an invitation to listen without bias, expectation, and preconception, and it shows respect for the user's experience.



            • #7
              Continued...

              The smoothed data fallacy. Just as FR is sometimes assumed to be an automatic benchmark of sound quality, smoothing the raw FR data is sometimes assumed to corrupt it. However, smoothing raw FR data slightly is how to make a speaker's plotted FR more meaningful – judicious smoothing reveals the presumably audible spectral FR trends hidden inside of excessively noisy raw data.

              Used this way, smoothing focuses the graphical view of the data to let us see how variations from flat might actually affect what we hear. If raw, unsmoothed data is useful, then we have as much obligation to interpret it into sound as we do for FR in general, a debatable task and purpose if we can't isolate what's meaningful and what's not.
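
              A minimal sketch of the kind of fractional-octave smoothing being described (the fraction chosen, like every other processing choice, is a judgment call; this version averages dB values, and analyzers differ on exactly how they do it):

              import numpy as np

              def octave_smooth(freqs, mag_db, fraction=6):
                  """Average mag_db over a 1/fraction-octave band centered on each frequency."""
                  out = np.empty_like(mag_db)
                  for i, f in enumerate(freqs):
                      lo, hi = f * 2 ** (-0.5 / fraction), f * 2 ** (0.5 / fraction)
                      band = (freqs >= lo) & (freqs <= hi)
                      out[i] = mag_db[band].mean()
                  return out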

              In the extreme, promoting any or all raw data as a test of data integrity implies (or accepts) that we can't really gather anything meaningful from processing it. That's not a view held by any number of sciences where meaning has to be deduced from the raw record and its raw data.

              A minor corollary occurs when we contrast a zeal for raw, unsmoothed data with our parallel affection for flat FR lines on graphs. At some point all data has to be processed for meaning and that processing always occurs arbitrarily. Data itself is a human choice; a subjectivity.

              The flawed ear fallacy. This was touched on already. If speakers are for hearing then a prime criterion should be actually hearing the speakers. It seems illogical to demote the ear from hearing and doubt it to the extent that the loudspeaker becomes an almost academic, data-driven pursuit, especially given the real, physical problems of imperfectly translating its sound from data.

              The assumption that the ear is this flawed is also unfalsifiable enough to make it an assertion more than a truth. This assertion appears in a number of ways and places when a listener's reaction, it's felt, should be conformed to some of the biases in favor of data; this bias exists in the reverse too, where an interpretation of the presumed sound of the device behind the data marginalizes the real sound and the data.

              At the least the assumption that the ear is this flawed should be balanced with the reality that the data is also limited. If both are limited, what is a loudspeaker's real purpose?

              This doesn't mean that speakers can't be built and sold for purely theoretical reasons. It just means that there's a logical contradiction in conflating academic exercise and an individual real experience, just as there is in believing that what I'm hearing is authentic because the data says it is or that what I'm hearing isn't allowed or true because I can't find it in the data.



              • #8
                Art and science

                Art and science. Despite this, data certainly isn't flawed for its own sake. If it is to serve the ear, all audio engineering must primarily involve scientific rigor, data, and the known, proven engineering framework. None of this brief commentary is intended to limit or short-change science. However, in the end the nature of audio design is so dependent on an artful blend and balance of innumerable human variables - why tweeter X works better than tweeter Y with midrange Z, or why acoustically-larger works better in an application than acoustically-smaller - that the hard facts of science can only inform that art. I'm just trying to put into a layman's perspective the sophisticated, essential, and potentially meaningful toolbox that data offers and relate it to its real, known limits.

                These are only some of the questions raised by our assumptions and assertions about loudspeaker data. Much of the casual, conventional wisdom about measured speaker behavior hasn't the depth to explore its challenges nor should it be expected to. Rather, audio enthusiasts are probably best off to listen first and then consider what used to be just laboratory intelligence as curiosities with, for them, somewhat limited real-world pertinence.

                A design goal and a particular plotted behavior are not at all the same thing as actually fooling the ear into the suspension of disbelief we presumably seek from reproduced sound. A design goal is just a presumed article of the art of design, whereas sound is a reality with a broad application for a surprisingly large cohort of witnesses: If one day we artificially recreate the sound of a violin, it'll be fairly indisputable to enough listeners to matter. Meanwhile the audio landscape is littered with impressive, textbook designs that failed the ear. More than a few people know why.

                Audio is an art, one strongly and essentially informed by and dependent on the sciences. It is not best served by either an excessively objective or excessively subjective orientation. Measured speaker response is crucial but it remains a toolkit, not a universal gauge. It's not complete enough to be the virtually complete indicator we sometimes assume it is.



                • #9
                  Summary

                  It's fairly easily shown that data can't be more relevant to sound than knowledge of the data itself is – you probably have to know medicine to interpret X-rays. Still, even an expert data technician has no obligation to explore the higher levels of the broader art. The mechanic is not the driver, the equipment designer is not an athlete, and the naval architect is not the skipper. These roles may and arguably should overlap but knowing the technology's data is not the experience of using the technology's machinery. Where instruments designed to make sound for people go, at some point we have to give the listener first priority.

                  Likewise, the listener needn't be an engineer, and an engineer's data should actually inform the listener purporting to use it. The question is whether he can, exactly how he will, and to what end: is his goal sound, or an academic interest? Still, even a substantial knowledge of data cannot be conflated with sound, and if even the flawless deployment of fairly complex data - data much richer than simple FR coordinate charts - can't assure the engineer of desirable or good sound, can it inform the layperson and listener of it?

                  Obviously, data is not sound; this almost goes without saying. Speaker data is exquisitely articulate visual evidence of the specific behaviors of an acoustical device gathered and processed by an equally specific and precise data system to be domain-shifted into one record and one kind of record at a time. It isolates and records slices of a much richer phenomenon, which is the action, behavior, and eventually, our experienced, aural impressions of a fairly complex transducing device with a host of varyingly perceptible aspects.

                  Data simply finds one of them, moves it elsewhere, and records it. Meanwhile, over where the speaker's real aim and utility lie - and very unlike a data system - our perceptions are simultaneously immediate, direct and directly meaningful, highly complex and in a way entirely complete; they lie in another sensory domain entirely ... and they are uncalibrated and cannot be recorded for later reference.

                  Measured, graphical data is just what it looks like: A set of coordinates forming an interpreted picture of the behaviors of a device with an aim in another domain. Data is reliable, historical, accurate, and essentially meaningful evidence of what are to some people highly relevant indicators of the behaviors of the device. But does data translate as a casual measure of the goals we would casually impose on it?

                  I emphasize that this isn't a commentary on any specific data-centric lab, entity, system, practice, or belief. Some will disagree and some will edit or correct it, which is perfectly fine and which I invite. I don't like the public debate on this subject and I even hesitate to post these thoughts here. But at some point I think it's fair and reasonable to wring some pertinence from a pursuit that seems to have morphed from the abstract and scientific into the global and subjective.

                  These remarks can't be definitive and aren't presented that way - they're a forum comment to put an endeavor into perspective. I'm not equipped or ambitious enough to write a book about measuring speakers. These remarks are not a synopsis of data, real utility, or systems themselves. They're informal remarks, like notes about a vacation rather than an analysis of train engines or a paper on the specific principles of flight. Since realizing what data means to a listening consumer is the aim, this is much more philosophical than it is specifically technical.

                  As of today Chane has a tiny range of modest, inexpensive speakers. That should change as 2017 and 2018 unfold but even so, I'm making no great claims for (or against) any specific product. In over 30 years we've never paid so much attention to “objective” data that we compromised what we felt was real sound and I think this is naturally reflected in our user base. I don't expect either to change much in the future.

                  -A comprehensive resource on testing and measuring loudspeakers is Joe D'Appolito's Testing Loudspeakers, 1998. Another is the library of the Audio Engineering Society (AES).



                  • #10
                    Thank you Professor Lane. Always good to have better points of reference and one I can point my friends to. As true 40 years ago as it is today, the best measurement instrument is the two on your head. We too often think something may sound great because someone told us, or that because they are 15k speakers they must be great. Trust your ears; if the music embraces them, that is the only true measure. It won't be the same speaker for everyone based on your listening characteristics (classical, rock, jazz, rap, techno) or volumes, but the thing I find is you will know it when you hear it.



                    • #11
                      Originally posted by 1st Time Caller View Post
                      Thank you Professor Lane. Always good to have better points of reference and one I can point my friends to. As true 40 years ago as it is today, the best measurement instrument is the two on your head. We too often think something may sound great because someone told us, or that because they are 15k speakers they must be great. Trust your ears; if the music embraces them, that is the only true measure. It won't be the same speaker for everyone based on your listening characteristics (classical, rock, jazz, rap, techno) or volumes, but the thing I find is you will know it when you hear it.
                      Thanks 1stTC. As they say, when it's right it's right. In the case of good audio, there can also be a "thresholding" effect where incrementally improving the gear improves the "technical" sound - bass, treble, midrange, and all that - up to some point after which it simply musically suspends enough disbelief that you forget you're hearing it, even if for a short while or only in some areas in the sound and not others. There's the sound of a device and there's the sound of no device. Very hard to do but very fun if you ever pull it off, even in degrees.

                      How this could ever translate into a static measured response completely escapes me. I've heard it from stuff that didn't stand a snowball's chance of rendering it, and I've not heard it from stuff reputed to be all sorts of things like accurate and neutral.

                      Funny thing is plenty of folks I know relate the same experience. There seems to be a strain of it. The point is that not only are ears somewhat involved in audio but when they come across something they really can't help but fancy, nobody (that I've met) knows what it really is...

                      Whether that ever translates into $200 speakers I'll leave up to the individual but it remains a phenomenon and an aim.



                      • #12
                        Jon, I could not have said it better myself if I tried.

                        Until the art of analyzing audio signals significantly advances, "measurements" will still fail to properly contextualize what we hear in real listening spaces just as much as car specs fail to specifically predict lap times on race tracks. They are still only general indicators.....so much more to measure.

                        Until Laser Scanning Vibrometry, Klippel analysis, and high resolution measurement techniques (96 kHz/192 kHz/384 kHz measurements at 32-bit depths) become more common, and more obtainable, the science of measurement will continue to fail to have reproducible, predictive ability beyond general statements of what is generally pleasing to the average listener and what is generally not.

                        Being an audio (and video) editing professional, I routinely analyze audio down to the sample level, and I propose that for a transducer (or system of transducers) to be accurately measured in an anechoic environment (which is impossible to obtain aside from a few places on the planet), a sort-of Nyquist theorem must be applied. I propose that an oversampling of 4 to 8 times the sample rate of the original test signal must be achieved before true scientific analysis of driver behavior can happen...and this must occur comparatively, and in real-time.

                        Put simply, we're just not there yet.

                        And, even when we get there, it is my professional view that we should be using this gathered information to reinforce our own ability to more accurately perceive sound than the other way around, which is being continually dependent upon sighted measurements. It would behoove the audio community to cultivate a musician-like ability to help enhance our listening ability as we confirm what we do and do not hear using sighted measurements....so that we can then listen more accurately.

                        It would seem that many audio enthusiasts, and some professionals, are advocating that we continually discard what we hear (and continually discount it) as unreliable, in favor of ever consulting sighted measurements. Even to the point of ignoring what we hear.

                        As with a neophyte musician, that may be good for a beginner who is not yet accurately playing a note. But, one must eventually leave the music room and LEARN TO HEAR ON THEIR OWN. This doesn't mean one cannot diligently verify the calibration of their instrument (as musicians verify tuning before each performance). But, again, this is to bolster their own ability to accurately hear.

                        It may be time to detach ourselves from the metronome, as it were, and learn to reconcile what we measure with what we hear, as opposed to put the two at odds with each other.



                        • #13
                          Thanks, BTJ. It's good to know that an audio pro finds some relevance in all that. In that respect your experience betters mine.

                          The point about resolution aside, the challenge with reconciling data with our sensory apparatus is largely that of translating between domains. Data isn't a facsimile with which to do that; it's a slice of a behavior or characteristic. It can even be a proof of a design point - and a powerful, essential one - but not of what's heard, at least not in a global sense. It's a detective, not a composer. It investigates but it neither explains nor creates the big picture phenomenon it relates to.

                          You can certainly adjust a variable, remeasure, and correlate apparent sound with changing data. That's simple enough that it becomes rote and automatic. But I can't identify an authentic sound using data abstractly. I can't say: there it is because I have data.

                          My points above applied mostly to the problem of assuming that those domains transfer seamlessly and, in the global, audible sense, meaningfully. They certainly apply to one another but they inform one another only somewhat. In my view, conflating them just because they can be conflated is logically flawed.

                          Thanks again...



                          • #14
                            Originally posted by Jon Lane View Post
                            Thanks, BTJ. It's good to know that an audio pro finds some relevance in all that. In that respect your experience betters mine.

                            The point about resolution aside, the challenge with reconciling data with our sensory apparatus is largely that of translating between domains. Data isn't a facsimile with which to do that; it's a slice of a behavior or characteristic. It can even be a proof of a design point - and a powerful, essential one - but not of what's heard, at least not in a global sense. It's a detective, not a composer. It investigates but it neither explains nor creates the big picture phenomenon it relates to.

                            You can certainly adjust a variable, remeasure, and correlate apparent sound with changing data. That's simple enough that it becomes rote and automatic. But I can't identify an authentic sound using data abstractly. I can't say: there it is because I have data.

                            My points above applied mostly to the problem of assuming that those domains transfer seamlessly and, in the global, audible sense, meaningfully. They certainly apply to one another but they inform one another only somewhat. In my view, conflating them just because they can be conflated is logically flawed.

                            Thanks again...
                            Agreed on all points.

                            A couple things.

                            A) Correlation still does not equal causation.

                            B) A summation of what I wrote in my previous post is this: we should be using measurements to refine our hearing (learning to listen more accurately, just as a musician learns to hear tones/note/pitch more accurately) as opposed to becoming essentially co-dependent with sighted measurements, where it is advocated that we are to routinely discard our own observations in favor of "measurements", as opposed to refining our ability to listen/hear. To be clear, I am an ardent advocate of working to correlate accurate, repeatable, peer-reviewed, and unbiased measurements to what we hear. But working toward that end and drawing the conclusion that to measure IS TO HEAR, is both fallacious and dishonest. Because, it is to assert that one currently knows all there is to know about audio. It's a very flat-earth kind of assertion.

                            In a car analogy, it would be to assert lap times based on detailed measurements and specifications. Taking measurements with gated tone bursts, at various axes, etc, does not equate to actual complex musical signals at a variety of output levels, connected to a virtually limitless number of different amplifiers (many of which can behave quite differently when interacting with the often complex impedance loads that actual loudspeakers present). So, it's truly an apples-to-axe-handles argument.

                            And, playing devil's advocate, I suppose it is possible that creating a culture where people are encouraged to be the opposite of a trained musician (instead of actually advancing one's knowledge, ever a beginner listener) and to continually discard their own ongoing experiences and observations ever in favor of measurements, provided by others, could possibly serve to advance the sales figures and bottom lines of a group of speaker manufacturers and their online cohorts, more so than advancing the best interests and best designs that most benefit listeners and the market in general. This group tends to advise these neophytes that, "If a speaker maker doesn't provide measurements, a) they must have something to hide, b) their products must be garbage, and c) their company is disreputable." I find this assertion, which is unfortunately common, to be both intellectually and factually dishonest, and to be incredibly misleading to the average enthusiast.

                            Especially dangerous when we consider that cults of personality are often cultivated around many of the advocates for the loudspeaker-measurements-or-bust crowd. I don't think this is necessarily advocated by those same personalities as much as it is Orwellian groupthink at work in the minds of their followers. Regardless, it is no less harmful or chilling to the intellectual honesty of the audio community at large. I may be alone, but personality is simply not a factor in my pursuit of truth, facts, or honesty. I think that cults of personality, in general, cause me unease. Call it the latent historian in me, lol.

                            I find it to be much more healthy to encourage people to do their own thinking, and listening, as opposed to offloading that process onto somebody else who very often hasn't even heard the speaker in question, and never in the actual listener's room.

                            I'm a teach-a-guy-to-fish kinda enthusiast, I guess.

                            I see much intellectual and scientific danger in instructing an entire "generation" of listeners and hobbyists to wholly ignore their own ears and listening observations and, instead, to 'trust' the consistently inconsistent measurements of somebody else (who is virtually never an uninterested third party, whether through professional, industry, or personal association with those mentioned above).



                            • #15
                              Originally posted by BufordTJustice View Post
                              A) Correlation still does not equal causation.

                              B) A summation of what I wrote in my previous post is this: we should be using measurements to refine our hearing (learning to listen more accurately, just as a musician learns to hear tones/note/pitch more accurately) as opposed to becoming essentially co-dependent with sighted measurements, where it is advocated that we are to routinely discard our own observations in favor of "measurements", as opposed to refining our ability to listen/hear. To be clear, I am an ardent advocate of working to correlate accurate, repeatable, peer-reviewed, and unbiased measurements to what we hear. But working toward that end and drawing the conclusion that to measure IS TO HEAR, is both fallacious and dishonest. Because, it is to assert that one currently knows all there is to know about audio. It's a very flat-earth kind of assertion.
                              Yes, and the casual pro-measurement movement is well into fallacy if it equates excellent sound with the mere presence of measured data, which happens. Also incorrect is the presumption that because a particular speaker has a particular on-axis/one-condition response, it sounds good (by now I'm repeating us both). That's a sighted bias.

                              There is scant real correlation* between these things and no logical correlation between them. They exist in different domains.

                              What I didn't emphasize in my first remarks was the array of ways to obtain a "flat" response (which is a trivially-easy pursuit). Among them are ways to get such a response that actually prevent a genuinely flat "group output". Put another way, flat one-axis/one-condition responses can easily have compromised, little, or even no other real, audible linearity in conditions other than that single event, while in that single event they can actually tend to sound deficient.

                              There is neutral sound and then there is a flat one-axis/one-condition target. And then there is the neutral, linear, audible "group target" output, which balances all aspects of the output against the inherent compromise that is the multiway loudspeaker. Therefore, without a thorough analysis of a speaker's entire family of responses and how they derive from all of its transfer functions in all domains, the depth and scope of which I don't think I've ever seen given away in public, we just cannot assume that X equals Y, especially that data directly translates to phenomenon and back again. One simply is not the other.

                              It's absolutely true that correlation does not equal causation, and, in the case of this point, a single aspect does not equal a comprehensive, global behavior. It cannot. In fact, a single, flat, one-axis/one-condition aspect has as much likelihood or more of conflicting with the global behavior as of enabling it. The missing element is how you get there and why.

                              *I'm aware of the premise that unsighted comparisons return a scientifically-reputed kind of preference cascade among a cohort - a group of listeners is found to generally prefer a method of technology and its ostensible behaviors over another method and its. However a problem arises with the limited device sample size and the limited technology sample size. In other words, it's impossible to test all available devices, and it's impossible to test all technology types within all speaker types - they're substantially incompatible, like steam engines in airplanes or two liter gasoline turbos in container ships. Further, it's not actually accepted that all technology categories - price/size, for example - shall be tested for absolute "objectivity" versus all others.

                              Here again we have data but we have arguably insufficient correlation between its inherently fragmented makeup and the whole range of available options. We can no more find a test that assesses Lamborghinis and Volvo semis equally (meaningfully) than we can all other vehicles in all (or most or enough) other technological combinations.

                              Such a preference and information cascade should be able to hold up to scrutiny and I'm not sure I've ever seen that done...

