Electronical and Electromechanical Explorations

This blog organizes and presents some of my various projects and musings related to taming wild electrons and putting them to work. Projects are listed down the right side of the page.

Wednesday, January 14, 2015

Signal Generator 1: High Level Design

One piece of test gear that I need is a reasonably-capable signal generator, for testing the response of circuits (in combination with an oscilloscope, usually).

It struck me that a project like this is a perfect excuse to get some experience with FPGAs, so I thought I'd start by sketching out a high-level design that I can use for component selection and so on.  It seems like a rather straightforward task, although as always the devil is in the details -- and everything looks easy until the reality sets in!

My basic requirements are:
  • 2 channels of output
  • 200 MSPS output sample rate
  • ±10 volt output range; reasonable fidelity down to 10-ish millivolts peak to peak
  • Enough output current to drive a 50-ohm transmission line (plus a bit more).  250 mA is probably enough; 500 mA would be nice
  • Basic set of signal types, including pulse trains, amplitude modulation, etc.  Details to be worked out later
The basic plan is to use an FPGA to generate the signals in the digital domain, either on the fly or (if necessary) by filling a memory buffer... with a provision for accepting an arbitrary waveform buffer from an external microcontroller.  The digital signal then feeds a DAC, whose output goes through an amplification and buffering stage.  To maintain accuracy, there will be an ADC for measuring output signals during a self-calibration process (and possibly during normal operation).
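To make the on-the-fly generation idea a bit more concrete, here is a quick Python model of the phase-accumulator (DDS) logic I have in mind for the FPGA.  The clock rate, accumulator width, and mid-scale DAC mapping are just my working assumptions at this point, not a finished design:

```python
import numpy as np

# Quick model of a DDS-style tone generator, as I imagine it living in the FPGA.
# Assumptions: 200 MSPS sample clock, 32-bit phase accumulator, 14-bit DAC.
F_CLOCK = 200e6     # sample clock (Hz)
PHASE_BITS = 32     # phase accumulator width
DAC_BITS = 14       # DAC resolution (the DAC5672 is 14-bit)

def tone_samples(freq_hz, n_samples):
    """Generate n_samples of a sine at freq_hz using a phase accumulator."""
    # Tuning word = how far the accumulator advances each clock tick.
    tuning_word = int(round(freq_hz * (1 << PHASE_BITS) / F_CLOCK))
    phase = (np.arange(n_samples, dtype=np.uint64) * tuning_word) % (1 << PHASE_BITS)
    angle = 2 * np.pi * phase / (1 << PHASE_BITS)
    # Map to unsigned DAC codes with mid-scale representing zero volts.
    full_scale = (1 << (DAC_BITS - 1)) - 1
    return np.round(np.sin(angle) * full_scale).astype(int) + (1 << (DAC_BITS - 1))

print(tone_samples(1e6, 8))   # first few DAC codes of a 1 MHz sine
```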

Here is a block diagram showing all the parts (just one output channel shown for simplicity).  Below the diagram, each part is briefly explained:
  1. Power:  Regulators to provide all the power rails needed, from a +12v input
  2. LCD for nice UI
  3. Touchscreen to simplify setting complex parameters
  4. Buttons and knobs for functions that operate better from hard controls
  5. External communication (USB probably)
  6. Accurate low-jitter clock generator to synchronize the MCU/FPGA/DAC and drive conversion
  7. Microcontroller for UI, communication, and FPGA control
  8. FPGA: almost certainly I will use a Spartan 6, which is pretty powerful, inexpensive, and solder-friendly
  9. DAC: I picked up a couple of cheap DAC5672s on eBay: dual 14-bit, 275 MSPS, in an easy TQFP-48 package
  10. Current to voltage conversion
  11. Filter to exclude high frequencies (which come from "steps" in the conversion).  Still want square waves to be square, though, so it's a tradeoff.
  12. Amplification to output voltage range.  Ideally this would be externally programmable (shown here with an auxiliary DAC), from -20dB to +20dB.  The details need to be worked out.
  13. Gain control for the amplifier
  14. AC coupling to remove any offset artifacts
  15. High-current output buffer.  Also moves the signal to its target common-mode voltage.
  16. Common-mode output voltage control for the output buffer
  17. Signal buffer for feedback measurement pathway
  18. Fixed attenuator to move the signal down to a range the ADC can handle
  19. ADC for signal measurement during self-calibration
  20. Solid-state relay to shut off the output (under user control and also during self-calibration)
  21. Impedance control for output transmission line
  22. Output jack (BNC)
This looks pretty complicated, but I don't think it should really be that bad.  Each element is relatively straightforward (not counting the FPGA magic...)
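As a sanity check on item 12, here's the kind of arithmetic the programmable-gain path implies.  The linear-in-dB control law and the 12-bit auxiliary DAC are pure assumptions on my part; the real mapping depends on whatever amplifier I end up choosing:

```python
# Rough sketch of mapping a requested gain (in dB) onto an auxiliary-DAC code,
# assuming a hypothetical gain stage that is linear-in-dB from -20 dB to +20 dB
# over the full range of a 12-bit control DAC.  Purely illustrative.
GAIN_MIN_DB = -20.0
GAIN_MAX_DB = 20.0
AUX_DAC_BITS = 12

def gain_db_to_dac_code(gain_db):
    gain_db = max(GAIN_MIN_DB, min(GAIN_MAX_DB, gain_db))   # clamp to range
    frac = (gain_db - GAIN_MIN_DB) / (GAIN_MAX_DB - GAIN_MIN_DB)
    return round(frac * ((1 << AUX_DAC_BITS) - 1))

def gain_db_to_linear(gain_db):
    return 10 ** (gain_db / 20.0)    # voltage gain

for g in (-20, 0, 6, 20):
    print(g, "dB ->", gain_db_to_dac_code(g), "code,", round(gain_db_to_linear(g), 3), "x")
```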

I have a little Spartan 6 development board, so the next step will be the agonizing component-selection process, after which I will construct a prototype of the output stage (all the stuff shown in yellow and blue).

Also I will work more on specifying features so I can think more carefully about how to implement it in the FPGA.  Since I haven't used an FPGA in a project before, it will be something of a learning curve... but that is 68.2% of the fun!


Friday, January 9, 2015

Oscilloscope: Architecture

As promised, here is a sketch of an architecture for a digital sampling oscilloscope.  I want to play with this architecture with a view towards building one for my lab, primarily as a pleasant hobby / learning experience.  I'm not sure that it's appropriate for all levels of oscilloscope implementation -- the traditional DSO architecture model is pretty good for low-capability hardware, and I may end up using that for the oscilloscope watch project I'm noodling around with.  And in practice I am sure it will get tweaked, whether for performance optimizations, to take advantage of hardware resources, or to support certain features.  And finally, this architecture only discusses the basic processing flow of a single channel and doesn't include any info about auxiliary features -- which aren't really relevant to the central model.

This design sketch is mostly high-level but contains a few bits of detail that I thought were interesting and relevant... not very rigorous, but eh...

One last thing:  This architecture is "soft" in nature -- implemented either as software running on microcontrollers and/or larger processors, as configurations in FPGA chips, or on a GPU.  Which parts go where is TBD and may depend on the requirements of a particular implementation.  The analog front end is of course very important and interesting, but is beyond the scope of this discussion.

Basic points of the architecture:
  • Separate processing into three parallel components:  sampling, triggering, and display
  • Samples are stored in a very large hierarchical circular memory
  • All triggering occurs by processing sampled data (rather than in analog circuitry)
  • Display functions reside in highly parallel line/polygon-drawing hardware

Sampling

As a first pass, sampling always occurs at a single (maximum) rate.  In practice that might be modified in unusual cases requiring very long data sets such as multiple seconds per div or persistence times longer than 5-10 seconds or so -- but for now consider the sampling rate to be fixed.  Two reasons for this:
  • Retaining full-rate data means maximum zooming is always available
  • Memory is (relatively) cheap, at least in the quantities needed.  16GB of DDR3 SDRAM costs about $100, so memory cost per se is not a major consideration.

Regarding SDRAM, DDR3 supports a bandwidth of more than 10 GB/sec and DDR4 is significantly higher, so a single memory is probably sufficient, but a "striped" array of memories could be used if needed.  I don't have the skills to construct circuitry to process multiple-GSPS systems anyway, but I am confident this memory-intensive architecture could scale if needed.
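For the curious, here's my back-of-the-envelope arithmetic on bandwidth and buffer depth (the 1 GSPS rate and 2 bytes per sample are assumptions, not a spec):

```python
# Back-of-the-envelope numbers behind the "memory is cheap and fast enough" claim.
SAMPLE_RATE = 1e9          # samples per second (assumed)
BYTES_PER_SAMPLE = 2       # assumed storage per raw sample
MEMORY_BYTES = 16e9        # 16 GB of DDR3
DDR3_BANDWIDTH = 10e9      # bytes per second, a conservative figure

write_rate = SAMPLE_RATE * BYTES_PER_SAMPLE       # raw sample write bandwidth
buffer_depth = MEMORY_BYTES / write_rate          # seconds of full-rate history

print(f"raw write rate: {write_rate / 1e9:.1f} GB/s (DDR3 can do ~{DDR3_BANDWIDTH / 1e9:.0f} GB/s)")
print(f"circular buffer holds about {buffer_depth:.0f} s of full-rate data")
```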

Sample memory is stored in a hierarchical structure (see figure).  The top level contains the raw samples.  Subsequent layers correspond precisely to user-selectable timebases to avoid scanning large amounts of memory to create low-time-resolution waveform displays.  The downsampled buffers contain more than single values -- probably <min, max, mean> from the corresponding data higher in the hierarchy.  Using this extra information allows rich detailed displays that avoid aliasing errors.

The sample hierarchy is created as the samples come in, probably in an FPGA front end using parallel resamplers if necessary.
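Here's a minimal Python model of how one of those layers gets built from the layer above it.  The factor-of-10 decimation per layer is just an illustrative choice; a real design would match the layers to the actual timebase steps:

```python
import numpy as np

def downsample(layer, factor=10):
    """Collapse `factor` consecutive <min, max, mean> entries into one."""
    n = (len(layer) // factor) * factor          # drop the ragged tail for simplicity
    b = layer[:n].reshape(-1, factor, 3)
    return np.stack([b[:, :, 0].min(axis=1),     # min of mins
                     b[:, :, 1].max(axis=1),     # max of maxes
                     b[:, :, 2].mean(axis=1)],   # mean of means
                    axis=1)

# Top level: raw samples, represented as <min, max, mean> = <v, v, v>.
raw = np.random.randn(1_000_000)                 # stand-in for ADC samples
layers = [np.stack([raw, raw, raw], axis=1)]

# Each deeper layer corresponds to a coarser timebase.
while len(layers[-1]) >= 10:
    layers.append(downsample(layers[-1]))

print([len(l) for l in layers])                  # 1000000, 100000, ..., 10, 1
```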

Triggers

Triggers are stored as a list of pointers computed by (probably parallel banks of) data scanners -- initially as data is sampled, and subsequently if trigger criteria are changed.  During trigger recomputation the hierarchy can be traversed bottom-up to focus on potential trigger points.

Computing (for example) a persistent display involves traversing the relevant triggers in the list and accessing the appropriate corresponding sample memory.

Triggers can be arbitrarily complex and since they are largely independent can be computed in parallel.  For real-time display implementing them with an FPGA probably makes the most sense, but when operating on a snapshot they can use more complex algorithms (at a cost in time).  For example, an "anomalous waveform" trigger could gather statistics about every waveform in the time frame of interest, then compute some distance metric between an average and each waveform in turn.  Similarly, algorithmically detecting "runt pulses" could create a trigger list of all runts in the sample memory, after which a persistent display could show all of them at once to get a view of their characteristics.  The huge sample memory makes this type of detailed post-analysis possible.
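For the simplest case, a data scanner is nothing more than an edge detector producing a list of sample indices.  A toy Python version (the real thing would be FPGA fabric, but the idea is identical):

```python
import numpy as np

def rising_edge_triggers(samples, threshold, holdoff=0):
    """Return indices where the signal crosses `threshold` going upward.
    `holdoff` suppresses triggers closer together than that many samples."""
    above = samples >= threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    if holdoff <= 0:
        return crossings.tolist()
    triggers, last = [], -holdoff
    for idx in crossings:
        if idx - last >= holdoff:
            triggers.append(int(idx))
            last = idx
    return triggers

# Example: triggers on a noisy ~1 kHz sine sampled at 1 MSPS.
t = np.arange(100_000) / 1e6
signal = np.sin(2 * np.pi * 1000 * t) + 0.05 * np.random.randn(t.size)
print(len(rising_edge_triggers(signal, threshold=0.0, holdoff=500)))  # ~100 triggers
```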

Summary features of the sample/trigger representation:
  • Full data is always available (zooming in to captured data always provides maximum detail)
  • Trigger points quickly accessible and easily recomputable for different views on the data
  • Since all data is captured continuously, there is zero dead time
  • Information required for optimum display at any timebase is immediately available; this information avoids aliasing errors in display

Display

A simple snapshot display of captured data is obviously trivial:  just read the data from the appropriate level of the hierarchy and display it. Of more interest, though, are displays that combine the data from multiple triggers.  The analog-inspired "persistent" display is one such; others might include displaying an "average" waveform (itself perhaps superimposed over individual waves), displaying "variation bands" (min/max) of all triggers, etc.

With the large sample buffer, operating on snapshot data instead of real-time is just fine for many analysis use-cases. 

In addition to the flexibility of display, aggregate-display modes such as persistence will be available if desired while scrolling and zooming through the sample buffer, though probably with some degree of lag depending on the number of triggers to be displayed.
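To illustrate what those aggregate views involve, here's a small sketch that takes a trigger list and the sample memory and produces an average waveform plus min/max variation bands.  A persistent display would instead accumulate each triggered waveform into an intensity map, but the data flow is the same:

```python
import numpy as np

def aggregate_waveforms(samples, trigger_indices, window):
    """Stack a fixed window of samples after each trigger and compute the
    mean trace plus min/max "variation bands" across all of them."""
    stack = np.stack([samples[i:i + window]
                      for i in trigger_indices
                      if i + window <= len(samples)])
    return stack.mean(axis=0), stack.min(axis=0), stack.max(axis=0)

# Toy example: one trigger per cycle of a noisy sine.
samples = np.sin(2 * np.pi * np.arange(50_000) / 500) + 0.1 * np.random.randn(50_000)
triggers = list(range(0, 49_500, 500))           # pretend trigger list
mean_trace, lo_band, hi_band = aggregate_waveforms(samples, triggers, window=500)
```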

Displaying a Million Waveforms per Second

Creating a display in real-time combining a million triggered waveforms seems like a daunting task, but it is pretty easy to throw hardware at the problem.  Slightly oversimplified: if a waveform consists of a sequence of N sampled data points <t, value>, it is trivial to break this up into N-1 line segments connecting the sequential data points.  For each one:
  1. Convert the <t, value> endpoints into screen coordinates <x, y>
  2. Draw a line segment between the two points
If we choose the algorithms with a bit of care, the drawing processes for each segment of each waveform are completely independent of each other, which means they can be run in parallel... and that's a LOT of parallelism!  There are two basic ways to parallelize this.  The easiest one is to just distribute them to a set of processing elements without caring where the line segments go.  If each of the line-drawers has equal access to the entire display buffer, that should work fine.

For some computing hardware, that kind of equal access might not be possible though.  For example, if using an FPGA to do this, we might be able to store the display memory in scattered bits of "block ram" inside the chip itself, which makes access really fast -- but we'd only want to allow certain nearby computation units to have access to the block ram for a particular screen region.  In this case, it would be better to have dispatchers sending individual segments to the appropriate "processors" (possibly breaking up the segment into smaller pieces first).

That is an implementation detail, though... the point is that it will be pretty easy to display enormous quantities of data using massively parallel line-drawers.  Design options include FPGA fabric and conventional PC graphics cards (among other choices).
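Here's a toy version of that pipeline in Python: map each <t, value> point to screen coordinates, chop the waveform into segments, and let each segment be drawn completely independently before the results are combined.  The resolution and scaling numbers are arbitrary; the point is just that the per-segment work never interacts:

```python
import numpy as np

WIDTH, HEIGHT = 500, 256

def to_screen(t, value, t0, t_per_px, v_mid, v_per_px):
    """Map a <t, value> sample to integer <x, y> screen coordinates."""
    x = int((t - t0) / t_per_px)
    y = int(HEIGHT // 2 - (value - v_mid) / v_per_px)
    return x, y

def draw_segment(buffer, p0, p1):
    """Accumulate one line segment into an intensity buffer (simple DDA)."""
    (x0, y0), (x1, y1) = p0, p1
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(steps + 1):
        x = round(x0 + (x1 - x0) * i / steps)
        y = round(y0 + (y1 - y0) * i / steps)
        if 0 <= x < WIDTH and 0 <= y < HEIGHT:
            buffer[y, x] += 1
    return buffer

# One triggered waveform -> N-1 independent segments.
t = np.arange(500) * 2e-9                        # 500 samples at 500 MSPS (assumed)
wave = np.sin(2 * np.pi * 5e6 * t)
points = [to_screen(ti, vi, 0.0, 2e-9, 0.0, 1 / 100) for ti, vi in zip(t, wave)]
segments = list(zip(points[:-1], points[1:]))

# Each segment gets its own buffer (a stand-in for an independent drawing unit),
# and the results are just summed -- order doesn't matter, so it parallelizes.
display = sum(draw_segment(np.zeros((HEIGHT, WIDTH), dtype=np.uint32), a, b)
              for a, b in segments)
print(display.sum(), "pixel hits")
```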

One important wrinkle worth mentioning:  As described above, the drawing uses linear interpolation.  In practice, it will be desirable to use sin(x)/x interpolation instead, especially at maximal zoom levels.  From what I can tell so far, this is mainly a filtering problem and breaks up into parallel tasks just as easily as the line drawing itself, so it shouldn't be a major issue.
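A minimal sketch of what I mean, using a truncated-sinc kernel.  A real implementation would use a properly windowed kernel and pipeline it, but each output point only depends on nearby samples, so it parallelizes the same way:

```python
import numpy as np

def sinc_interpolate(samples, upsample=8, taps=16):
    """Reconstruct `upsample` points per input sample with a truncated-sinc kernel."""
    n = len(samples)
    out = np.zeros(n * upsample)
    for k in range(n * upsample):
        t = k / upsample                          # output position, in input-sample units
        center = int(np.floor(t))
        idx = np.arange(max(0, center - taps + 1), min(n, center + taps + 1))
        out[k] = np.dot(samples[idx], np.sinc(t - idx))
    return out

# Zooming into 64 samples of a sine as if they were the raw capture buffer.
zoomed = sinc_interpolate(np.sin(2 * np.pi * np.arange(64) / 13))
print(len(zoomed))          # 512 interpolated display points
```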

So that's the basic architecture! 

Next I want to look into whether I can use a version of this architecture for my goofy oscilloscope watch (I suspect not).  Otherwise, it's time to start planning the design of a "real" oscilloscope for use on my bench!

Oscilloscope: Waveform Update Rate = Confused Mess

Continuing from my last post: Keysight's oscilloscopes, with their MegaZoom ASIC and the million waveform updates per second it enables, are very cool.  But as somebody looking to design and build my own scopes, I think something still smells funny here.

All of this is still based (for the most part) on the Standard DSO Model (SDSOM) of oscilloscope architecture, which itself is based in significant detail on the operation of electron-beam-slinging analog oscilloscopes.  With the technology available these days for building scopes, I'm not so sure that model is really appropriate any more.

Don't get me wrong:  I like analog oscilloscopes.  I have an ancient Tek alongside my crappy Owon, and for lots of things it is a pleasure to use.  But a digital sampled signal analyzer is a different device and although the display of an analog scope is (for many common uses) still pretty much optimal, the path to get there is quite different and maybe it should work differently!

The very idea of Waveform Update Rate as a metric is based on the processing model inherent in the SDSOM, and it is kind of stupid.

First of all, in the era of persistent displays, I am not sure exactly what it even means.  Obviously the LCD itself cannot be updated a million times per second.  I guess it amounts to the number of times per second that something could contribute to the display (eventually) shown on the screen?  If what we really mean is "triggers processed per second", then we should call it Trigger Rate!

In the SDSOM, the model is basically:  Trigger -> Sample -> Update -> Trigger -> Sample -> Update (etc).

Keysight calls the "Update" part, during which data is ignored, "Dead Time".  Which sounds about right.

They have moved away from this model in some ways.  At least they have gotten rid of the idea of a distinct Capture Buffer that the user needs to fiddle around with to make tradeoffs between captured data and update rate.  Old hands used to things working that way may feel a little adrift without that setting, but what a dumb thing to have to think about!  I'm not sure exactly how they got rid of it -- maybe they just pick a value that makes sense and hide it all behind the scenes... but I hope it's more than just that.

Consider the Keysight App Note I mentioned last time:  (5989-7885EN): "Oscilloscope Waveform Update Rate Determines Probability of Capturing Elusive Events".

"For example, at 2 ms/div the scope's on-screen acquisition time is 20 ms.  If a scope had zero dead time, which is theoretically impossible, the absolute best-case waveform update rate would be 50 waveforms per second (1/20 ms)."

I call BS on the thinking behind all of this.  It makes sense in the context of the SDSOM, but not so much if we take a step back and think of the problem outside of that architecture.  Two reasons for the BS-calling:

1. (Less important) There is no theoretical reason for us not to update more than once during the on-screen acquisition time -- which means triggering more than once during the time the data is displayed.  Suppose that the display is showing 5 waveform cycles.  There's no theoretical reason we shouldn't trigger 5 times.  In the case of a persistent display (or other aggregate view), we would end up redrawing the same data at different places on the screen, but so what?  What we want is for the different triggers to be aligned so they can produce a data-rich view, and the fact that one cycle happens to be visible also off on the right of the screen somewhere is irrelevant.

2. Zero dead time is certainly not theoretically impossible if thinking outside the SDSOM; dead time is an artifact of an obsolete architecture.  There is no theoretical reason that makes it necessary to stop sampling or stop triggering just because we are busy drawing!
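Spelling out the numbers behind these objections (the example values are mine, not the app note's):

```python
# Numbers behind the two objections, using the app note's 2 ms/div scenario.
time_per_div = 2e-3                 # 2 ms/div
divs = 10
window = time_per_div * divs        # 20 ms of on-screen acquisition time

# Objection 1: if the signal repeats every 4 ms (assumed), five cycles fit in one
# window, so there is no reason not to trigger five times per window.
signal_period = 4e-3
per_cycle_rate = (window / signal_period) / window    # 250 triggers/s

# Objection 2: with continuous sampling there is no dead time at all --
# every incoming sample lands in the circular buffer regardless of drawing.
sdsom_best_rate = 1 / window                          # the app note's "best case": 50/s
print(sdsom_best_rate, per_cycle_rate)
```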

The principle behind these objections, and the basis of an oscilloscope processing model that I want to pursue, is that the sampling, triggering, and display processes are inherently independent (although obviously related) and operate in parallel.

This idea, which is hardly radical, leads me to a sketch of an oscilloscope architecture that I would like to play with.  I will present that in my next post.

Oscilloscope: Keysight and the SDSOM

Oscilloscopes are cool.

As I tinker with ideas for designing and building them, I've been reading about some existing scopes, and (as a result) have also been lusting after the Keysight (nee Agilent nee HP) InfiniiVision X-series scopes.  Pretty awesome stuff!  And also a great example of technological advancement:  By rethinking the model of how a Digital Storage Oscilloscope (DSO) works, they came up with a fundamentally new design which led to breakthrough performance improvements.

They are a little (well, a lot) out of my price range, but if I am going to build something I'd like to build something snazzy, so it's a great source for inspiration.

I became curious about how the performance of a scope is measured.  Besides obvious things like sampling speed and fidelity, and analysis feature lists, a very interesting performance metric is the fraction of incoming data a scope can actually process.  I was a little surprised by how small that fraction is, but it does make sense: a scope only ever looks at a sliver of the incoming data.

I came across a clearly-written App Note from Keysight (5989-7885EN): "Oscilloscope Waveform Update Rate Determines Probability of Capturing Elusive Events".  It got me thinking... but first, let me describe the Standard DSO Model (SDSOM), which is closely related to how analog oscilloscopes work, and is still dominant -- especially on crappy cheap scopes like the Owon I have on my bench:

A scope display shows a particular length of time, expressed in time-units per "div", where a div is a gridded section of the display.  Typically there might be ten divs visible horizontally on the screen.  This is called the "timebase" and is one of the most important settings on a scope.  The user will crank the timebase knob to see an amount of time that gives a good view on whatever signal she is investigating.  For example, a timebase setting of 10 microseconds per div (in my example here) would result in 100 microseconds of data being shown on the screen.

In the SDSOM, the timebase determines the sample rate.  If we need to show 10us of data per div and a div is 50 pixels wide, a sample time of 10us/50 = 200ns (5 million samples per second) makes everything nice and easy.
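The arithmetic, spelled out with my example numbers:

```python
# SDSOM-style "sample rate follows the timebase" arithmetic, using my example numbers.
time_per_div = 10e-6       # 10 us/div
pixels_per_div = 50
divs_on_screen = 10

sample_period = time_per_div / pixels_per_div      # 200 ns per sample
sample_rate = 1 / sample_period                    # 5 MSPS
screen_time = time_per_div * divs_on_screen        # 100 us shown on screen

print(f"{sample_period * 1e9:.0f} ns/sample, {sample_rate / 1e6:.0f} MSPS, {screen_time * 1e6:.0f} us on screen")
```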

In the simple normal case, some kind of little analog circuit that is watching the signal generates a pulse when a trigger occurs (because, e.g. the voltage passes some threshold).  The scope sees that pulse, says "AHA! Something interesting!"  It shuts off the triggering circuitry (because it doesn't want more triggers while busy processing this one) and begins sampling.  It grabs a certain amount of data and stops sampling.  That data (which I will call a Capture Buffer; not sure what the standard term is) gets stored in memory.  At a minimum, it should be big enough to fill the screen (100us in my example), but it is usually bigger so you can scroll around to see more data if you want to after the fact.  Usually you can select the capture buffer size yourself, or use some default size selected automatically based on the available memory or whatever.

Still following the SDSOM, once the samples are secured in memory, the scope scans through the portion of the memory corresponding to the screen and fills a display buffer with a bunch of lines graphing that sample data, and also updates display elements related to the captured data.  It then updates the physical display with the contents of the display buffer.  Having done all of this, the scope is ready for more data, so it turns the trigger circuitry back on and waits for a new trigger pulse.

Notice that it takes time between getting the sample data and re-enabling the trigger.  Any triggers that occur while the trigger circuit is disabled cannot be processed.  So the performance metric is: how much data is unavailable due to processing overhead?

The answer, in general, is:  almost all of it.  Really.  The scope only "sees" a tiny fraction of the incoming data.

You can measure this if your scope has a "trigger output" which sends an electrical pulse out a port whenever a trigger occurs.  On a low-end scope like my Owon, you might get a maximum of 50 triggers per second, give or take.  I don't really like this measurement method because it is kind of fiddly.  To maximize the rate you have to set the capture buffer to the minimum size available to minimize the time spent sampling -- THAT data shouldn't be counted as "lost" since it is available for viewing and analysis.

So in our example, we can capture 100us of data 50 times per second and the rest is ignored: 5ms of data per second processed means 995ms of data per second ignored:  over 99%!  And if our timebase was 100ns instead of 10us (implying a sample rate of something like 500MSPS), the trigger rate is still only around 50 per second max... meaning that 99.995% of the data is ignored.
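The same arithmetic as a tiny script:

```python
# Fraction of incoming data that never gets captured, for the two example timebases.
def ignored_fraction(time_per_div, triggers_per_second, divs=10):
    """Fraction of each second of incoming signal that is simply thrown away."""
    captured_per_second = time_per_div * divs * triggers_per_second
    return 1 - captured_per_second

print(f"{ignored_fraction(10e-6, 50):.3%} ignored at 10 us/div")     # 99.500%
print(f"{ignored_fraction(100e-9, 50):.4%} ignored at 100 ns/div")   # 99.9950%
```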

It seems to be normal to call this trigger-processing rate the "Waveform Update Rate" -- which makes sense because it sort of means:  how often can the stuff shown on the screen be updated?

I'm certain that even cheap scopes could do better than that if they tried harder, but is it something we even care about?  So what if only a little bit of the data gets processed?  It's a constant stream so there is always as much data as we can possibly display coming in anyway.

There are at least two reasons we might care:  First, if we can display more data on the screen somehow, then we really do want more data available to display.  More expensive and capable scopes have methods like "persistence" which can actually display the data from multiple triggers on the display at the same time (mimicking the phosphor persistence of analog scopes).  That is useful because you can see the variation between different waveforms (and it looks super cool too if it is done right).

The second, related reason has to do with rare events.  Often we use an oscilloscope to help debug hardware that shows flaky behavior, which may occur because of occasional glitches or rare variations in the waveforms.  If a glitch happens once a second on average and we only see 1/1000 of the data, we'd have to wait 1000 seconds on average to capture it, which is a pain in the ass.

But finding these glitches (and subsequently analyzing them) requires scope features that the cheapest scopes don't have (even if they did capture all the data):  how do you find the weird wave?  With a persistent display as mentioned before, it would stay visible long enough to actually see it instead of being overwritten 1/50 second after it was drawn.  The other option is to have fancier triggering mechanisms capable of distinguishing the glitch from a normal waveform.  Since (for example) my Owon really doesn't have any of that capability, there isn't much motivation to optimize the waveform update rate.  Eh.

But the Keysight scopes do have those abilities:  very sophisticated triggering AND a gorgeous persistent display.  So it matters to them.  Still, for scopes like that, a waveform update rate in the ballpark of 10,000 per second has been pretty normal, which sounds pretty awesome!  At that rate, the processing overhead for each sample/update cycle is only a tenth of a millisecond!  But still, especially on fast timebases, the vast majority of the data comes in during those tenths of milliseconds.

The Keysight scopes can do up to a million waveform updates per second!  They do this with custom chips and other macho engineering, and it's one of the reasons they are so cool.

Amazingly, though, in common cases, even at a million updates per second 90% of the data is still unprocessed.  Maybe that's still a pity and maybe it's not too bad... only the rarest glitches become super painful to wait for when 1/10 of them get seen.  So thumbs up to Keysight!

But... something still bothers me about all this.  I'll write about that in my next post.

Thursday, January 1, 2015

ScopeMeter Watch 1: Requirements

Since I am on the path toward building electronics lab/test gear and since I am interested in miniature electronics projects, I decided that my first oscilloscope project should be a ScopeMeter Watch!  Who wouldn't want a watch that is also an oscilloscope?

So as I am trying to wrap my head around the requirements for all the things I need to cram into it, I thought I'd make an effort to list what seem to me to be the basic requirements.

In terms of raw functionality (features):
  • Two oscilloscope input channels, including:
    • Trigger capability
    • Adjustable attenuation/amplification
    • AC or DC coupling
    • Target sample rate: > 50 million samples per second
  • Basic multimeter functions:
    • Voltage measurement
    • Resistance measurement
    • Continuity checking
  • Power stuff:
    • Battery power regulated to necessary power rails
    • Battery capacity monitoring
    • Battery charging
  • User Interface:
    • Display screen:  UI modes for:
      • Oscilloscope
      • Voltmeter
      • Resistance/continuity
      • Time display
      • Settings configuration
    • Controls:
      • Methods for the user to control all the operations of the device
    • Audio:
      • Some way to report continuity test results
      • Also usable for alarms and other feedback
  • External connections:
    • Two plugs for probes
    • External power for charging
Here is a diagram to show all the stuff that needs to go into the design:

This is definitely a goofy project, but an interesting challenge.

I am not so concerned about how great the oscilloscope is... Eventually I do want to make some pretty awesome test gear, but first I need to get some experience with the kinds of issues that come up...

Thursday, December 25, 2014

On Usability (rant)

In my last post, I concluded that I need to master the use of user interface components, especially displays.  And so, of course, I think you should too!

<rant>

Embedded products are famous for having crap usability.  That doesn't make it okay.

Typically there are (at least) two aspects of an electronic gizmo -- the internal circuitry (the technical bits under the hood) and the interface to its user.  This could be a programming or electrical interface if the project is a component for use in larger systems -- and usability is important for those things as well -- but that is not what I am writing about today.

Older engineers like me tend to get fossilized in our thinking and resistant to change.  For example, like it or not the technological advances and design work that led to the iPod, iPhone, and all that followed really did revolutionize the usability of handheld devices (the same way that mice and windowed displays revolutionized the usability of PCs).   Deal with it. 

If that idea doesn't make you unhappy, you probably can skip the rest of this post!

Certainly not every project needs to be burdened with a high-resolution touch screen, but maybe your project would actually benefit from that.  Look at the reasons you resist the idea... are they good informed reasons or stubborn biases or laziness?

Some issues:

Cost: These days, adding a touch screen to a project that otherwise doesn't even need a microcontroller adds around $20 (with some effort) to a hobbyist one-off project.  I don't know the cost in quantity, but it is obviously less.  Compared to the cost of whatever the alternative would be, the extra expense may or may not outweigh the usability benefits.

Difficulty: this is probably the main reason people avoid using modern UI components... it takes time to learn about and implement them.  And, since it is peripheral to the important bits under the hood, it can seem like silly overkill from a development perspective.  Realize, though, that this is only true until you add some abilities and experience to your toolbox and skillset.  And ask yourself whether you really want laziness to be the driver of your design.

Actual Usability Benefit:  If an optimal interface for your device is really one button and a three-digit display, that is awesome!  Nice job focusing!  Give your user a pleasant and responsive knob and readable, appropriately-sized digits that are easy on the eyes and update at an appropriate speed, and you are done.  But features do tend to creep in... if you find that buttons are ending up with multiple functions in different circumstances, other buttons are switching "modes", you are cycling through different meanings for the numbers in the display, or multiple settings need to be manipulated using that same three-digit display... stop!  You have probably evolved a crap UI...
  • Rule of thumb: if your device isn't pretty much intuitively obvious to a slightly-dimmer-than-average member of your intended user-base, you have failed.  That varies depending on the user base of course.... But if your user needs a written manual, you have probably failed, except for minor features of highly complex gear where users are expecting to spend hours learning the device anyway. 
Details: Of course, other things are involved... for example:
  • Size:  if the device is tiny, that MIGHT exclude some UI options.  However, it might not! A little touchscreen might be at least as space-efficient as a few dedicated buttons and a numeric LCD.
  • Wasted Power.  Only really an issue for a battery-powered device where you are counting milliamps.  There are ways to change power levels on microcontrollers during times when the display doesn't change, backlight power can be used intelligently, and so on -- but there certainly are circumstances where power consumption is an issue... a device that runs out of battery power way faster than its user expected is probably even more crap than one that is difficult to use.
  • Environment:  if the device needs to be usable underwater or in conditions that must remain pitch dark, or be used while driving a car, etc, that obviously limits the ways its user can interact with it...
Distaste:  Maybe you just personally don't like all this modern garbage.  I am actually not going to ridicule this reason... there are others like you who will appreciate your device, and nobody should spend their limited time on things they hate.  And older technology of course was often awesome!  I have one of those old Tektronix analog oscilloscopes and not only is its UI a brilliant piece of design, but the feel of using it is very pleasant.  If you are going to let passions like this drive your design choices, it is still important -- maybe even more important -- to avoid compromising that passion.
  • Intuitive design is still important... and probably even more difficult.  Now the important thing is laying out the buttons and knobs, providing feedback with LEDs and meters, labeling things clearly on your panel, etc.  Do it right!
  • Use quality components.  Crappy buttons and gnarly knobs aren't old-school chic, they are just crap to use, and always were!
Of course, usability isn't just about which bits of hardware you decide to use.  Whatever interface you choose, it will be a bunch of work to get it right.  If you aren't willing to do that work, then admit that you are playing.  Communicate what you learned, share your experience experimenting -- but don't be confused: you haven't made a product and the world doesn't really need another frustrating device ("open source" or not).

</rant>

:-)

Programmable Load 1: Concept, Motivation, Goals

As I think about various projects, I am realizing that what I really need most is some more test gear to help me discover and diagnose problems, which will inevitably occur.  I have a multimeter and a cheap Chinese digital oscilloscope (as well as an ancient rather limited analog scope), and a pretty nice RCL meter, but I feel as if I would become more effective if I had some more tools.

One useful tool that should be a reasonably simple project: a programmable load, to help test power supplies (both the lab supply I wrote about the other day and also the sort of routine power supplies that virtually every project needs).  I think this will be a good "first homebaked test gear" project to see through to completion, so it's time to begin!

The idea, of course, is to load down Circuits Under Test with some programmed current value so their behavior can be examined for proper operation.  For the most part, the most useful currents will be of "medium" size... from tens of milliamps to maybe an amp.  Additionally, I would like to be able to fairly accurately program in much lighter loads, down to under a milliamp, in order to:
  • Check the behavior of supplies in a "barely loaded" situation
  • Characterize the behavior of circuit elements besides power supplies in normal sorts of situations.
It would be handy (because of my interest in motor control) to have a much beefier programmable load, but I will leave that for another day so I don't get bogged down in the difficulties of high-power circuitry.

Additionally, there are a few other features I want:
  1. The ability to test transient response, from sudden increases or decreases in current demands.
  2. The ability to test a supply's behavior in response to "pulsed" current demands.
Those extra features shouldn't cause too much difficulty (at least I don't think they will), but they are more complicated to set up -- which means that a simple "one knob" user interface won't do the trick.

When that happens (which is most of the time, I think!), I really want to make sure to do a good job with the interface (I will add a separate rant about that subject when I'm done writing this).

It seems as if most of the projects I want to work on want some sort of display that is more than just a number.  And it also happens that I am interested in display devices.  So, part of this project will be to bite the bullet and build up a good set of skills, methods, and tools for using displays in projects!