Manufacturing guy-at-large.

Being in New York

Added on by Spencer Wright.

A few days ago I went to hear Ron Conway, Fred Wilson, and Michael Bloomberg speak about investing, entrepreneurship, and civic engagement. At a few moments during the event, the conversation turned to the current state of business (specifically startups) in New York City, a topic I've been thinking a lot about recently.

I feel passionately about working in New York. So much good work - across such a wide range of disciplines - has been and is being done here. And the combination of the vast feeling of cross-pollination and the fact that people come here specifically to do stuff of historic proportions - to make the most of their lives - is unlike anywhere else I've ever been (outside of urban China).

On a daily basis I look up and feel these pangs of energy, and wonder, and appreciation. I feel it talking to the Burmese cab driver bringing me back down North Conduit Ave from JFK. I feel it walking off the A/C train, and out through the old AT&T Long Lines headquarters, and onto Canal St and the morning in lower Manhattan. I feel it when I'm walking my dog around Bed Stuy at night and look up, through streetlights dappled by sycamores, to nod at someone smoking a joint on their stoop.

And I feel it in my work. As Bloomberg said this evening: If you want to make a business that serves the world, you need to go where the world is. And I believe that it is here more than anywhere that the many aspects of human life and work coexist best.

NYC has proven time and again that it's capable of spinning up and maturing fully fledged industries. And while many cities tend to go from one primary industry to another with little overlap, NYC somehow manages to grow and sustain many world-class operations at once. This is perhaps the most powerful part of working here: the ability to cross from industry to industry on a daily basis, and to develop long-term relationships with people operating on totally different time scales.

I'm enriched by it. It's a world class place to work, and there's no better city to spend your life in.

Teardown: Nerf N-Strike Jolt Blaster

Added on by Spencer Wright.

Last week I led a bunch of NYC hardware folks through a design for manufacturing exercise in which we tore down inexpensive consumer hardware products and tried to understand how they had been engineered for manufacturability. It was fun seeing a range of things taken apart, and I wanted to do the exercise myself here.

I chose my favorite product of the night: A Nerf N-Strike Jolt Blaster, sold on Amazon for a whopping $5.99. 

Note that I discarded the packaging before taking my camera out. It was very simple - a piece of printed cardboard, a thermoformed plastic sheet, and two pieces of clear tape.

The blaster (I guess I'll use "blaster" here instead of "gun," though it seems a bit silly) comes with two darts. I took those apart first. They're made of two parts: a piece of cut-to-length blue foam tubing and a piece of molded orange and white rubber. They're glued together, probably with cyanoacrylate aka crazy glue - everything in the blaster seemed to be glued together with CA.

Next I removed the four screws at the base of the handle. These were the only screws in the entire product, and they're installed directly into the molded plastic body so no nuts are needed.

Next I removed the two rubber parts on the plunger, which had a light coating of lubricant on them. First there was an o-ring, and then there was a molded button-shaped part which was installed underneath a rivet.

With the rubber parts off, I pried the rivet (which had a barbed shaft and was pressed into the end of the orange plunger handle) out of the assembly. 

Next I removed three orange parts off of the barrel of the blaster. These appeared to be completely cosmetic.

Next I removed the blue plastic cap off of the back of the blaster. This has little false screws (colored blue as well), and was glued into the blaster body pretty securely. Behind it was a light gauge spring and the dart drive mechanism itself.

Lastly, I pressed the trigger pivot pin out of the blaster's body. I used the cap from a small brass container I made a few years ago to hold the blaster off of the vise jaw, and a torx driver bit to push the pin through the blaster body.

Here's the entire product disassembled:

The whole blaster has 24 individual parts, plus packaging. The full BOM would have 21 line items on it, plus cyanoacrylate glue and two pieces of tape. It's possible that the screws and trigger pin come off the shelf (and conceivable that the o-ring and possibly the springs do too, though I suspect they're custom), but everything else would require a significant amount of custom tooling. I count about 25 individual assembly steps required to put the whole product together. Oh - and a few of the parts are painted, too.

All of this costs $5.99.

I think this is pretty incredible.

Element Free

Added on by Spencer Wright.

When I joined nTopology, our flagship CAD software - Element - was in closed beta. I had used it myself over the fall, and was impressed at how quick and easy it was to generate variable lattice structures. But the GUI was often confusing and many of the core functions were still very much prototypes.

Today, I'm proud to announce that nTopology has released its first public product - Element Free. We've spent a ton of time on this over the past four months, and have both streamlined the workflow and improved the core design tools needed to design and edit complex lattice structures. 

We'll be working hard to integrate more features into Element Free over the coming months - and will be releasing a Pro version this summer. Head over to the nTopology Product page to download the software yourself!

Coherence

Added on by Spencer Wright.

This photo is of a conference table in Alcoa's headquarters:

If it's not clear, this little inlay is made from aluminum. Which is pretty rad, considering that Alcoa is an aluminum company.

As a project manager, I'm all about choosing the right tool for the job. But I'm also all about using the tools that are available to me in the most effective ways. This table didn't need an aluminum inlay, but the aluminum inlay that Alcoa gave it worked pretty damn well.

Reflections on three weeks of speaking

Added on by Spencer Wright.

I've given versions of the same talk three times over the past three weeks, and wanted to take a moment to note (mostly for myself) some observations I've had about both my own presentation and public speaking in general. 

First, I'm pleasantly surprised at how little nervousness I've felt. I've done a bit of public speaking in the past year or two, and in former lives have held jobs that required somewhat more than just commanding a room, but the past month's events have been less personal and had a higher chance of impacting my career - and still I've gone into them feeling more or less comfortable. Certainly some portion of this is my familiarity with the subject matter (my talk is not entirely a review of things I've written about on my blog, but there's a lot of overlap), but I dare say that I might also be growing into myself a bit. I recognize that this is kind of a weird thing to say of oneself, but I'm pretty sure it's at least partially true.

I think some part of my degree of comfort has to do with the fact that I've found a way of balancing my own deeply held philosophy with the fact that I'm selling something that speaks to that philosophy. This has been a long time coming, and probably deserves more than I can grant it here, so I'll leave it at that and move on.

I will note, however, that the entire experience of speaking at an event is noticeably more exhausting than simply attending. I suppose this is self-evident, but presenting your work & thoughts is de facto an invitation for people to ask questions of you (and present their own work & thoughts one-on-one), and responding to that attention takes considerable energy. That's not to say that I don't enjoy it; indeed, eliciting a response is the primary reason to speak publicly in the first place. But it drains me a bit too - and I'll admit that I still haven't followed up on all of the business cards I've collected this month.

Lastly: I've also seen quite a few other folks speak publicly over the past month (conferences are conferences, after all), and I can't help but wonder what I would think of my own talk. If anyone out there has seen me speak recently and has feedback, send it along :)

Seatposts assembled

Added on by Spencer Wright.

Before I send these three seatposts out for testing, a quick update:

The seatpost heads (which I wrote a detailed post on a few months ago) are now glued to carbon fiber posts. I also added a thin carbon fiber disc to the top of each of the posts, so that water can't get into the bike's seat tube. The whole thing was assembled using 3M DP420 epoxy.

These are headed back to EFBE this week, where they'll go through the same ISO test as my seatmast topper was subjected to. More details soon!

Point modifiers

Added on by Spencer Wright.

Just a quick update to yesterday's post - here are some screenshots showing a little bit of how I'm controlling thickness on my lattice stem.

Our variable thickening algorithm allows the user to input minimum and maximum beam diameters. If a beam isn't within the range of any point modifiers, then it's thickened to the minimum value. If it's within range, then its thickness is determined by the falloff curve of the modifier that it's within range of. If it's within range of multiple point modifiers, then the greater thickness value is used.
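
In pseudocode, that rule looks something like this. To be clear, this is just a sketch of the logic I described above - not our actual implementation - and all of the names and data structures are made up:

```python
import math

def beam_diameter(beam_midpoint, modifiers, d_min, d_max):
    # Out of range of every modifier: the beam gets the minimum diameter.
    candidates = [d_min]
    for m in modifiers:
        dist = math.dist(beam_midpoint, m["location"])
        if dist <= m["range"]:
            # falloff() maps normalized distance (0 at the point, 1 at
            # the edge of the range) to a weight between 1 and 0.
            weight = m["falloff"](dist / m["range"])
            candidates.append(d_min + weight * (d_max - d_min))
    # Within range of multiple modifiers: the greatest value wins.
    return max(candidates)
```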

As you can see above, the Modifier Editor allows the user to preview the effect that the modifiers will have on a part; blue means that a region is not within range of a modifier (and will be the minimum thickness), and red means that it's within range (and the maximum thickness will be applied). We allow you to preview this on any mesh in your project. Here I'm looking at a variably thickened lattice, but generally I'd start with a uniform thickness lattice and then play around from there.

The big change in the design yesterday was adding point modifiers in four locations: on either side of the handlebar clamp, and on the top and bottom of the steerer clamp. These modifiers have steep cosine falloff curves, meaning that they have a big effect on a relatively small region of the part. I've controlled the range and falloff so that just the beams on the edges of those surfaces are affected.
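
And for reference, here's roughly what I mean by a cosine falloff - again, a made-up sketch of the general shape rather than our exact curves. A function like this is what the falloff slot in the previous sketch expects:

```python
import math

def cosine_falloff(x, steepness=1.0):
    # x is normalized distance: 0 at the modifier's point, 1 at the
    # edge of its range. steepness > 1 makes the weight drop faster
    # near the point, concentrating the effect on a smaller region.
    x = min(max(x, 0.0), 1.0)
    return (1 + math.cos(math.pi * x ** (1 / steepness))) / 2
```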

I also have point modifiers at all of the bolt holes, and a few that control thickness on the rest of the clamp surfaces, and then two point modifiers that make the transition from the clamp surfaces to the center of the extension a bit more gradual.

We've been thinking a bit more about how to develop modifiers in the future - stay tuned!

Stem update

Added on by Spencer Wright.

A friend asked me yesterday what was going on with my lattice bike stem design, and after telling him that it's been on the back burner I played with it a bit and made some real (if subtle) improvements. 

First, I should note here that I'm *not* worrying about overhanging faces. That's mostly because I'm working at nTopology to break down manufacturability of lattices into its component parts, and am tabling all of my DFM concerns until I have real data to back them up. In addition, I'm focusing on using variable thickening to maximum effect right now. I've used variable thickening a lot in the past, but the next software update of nTopology Element pushes it even more into the forefront, and I want to dogfood it a little before we release it to the public :)

I don't have screenshots of the whole process, but this part was designed using much the same method I was using last fall. I used Inventor to make a design space, and Meshmixer to generate surfaces to grow a lattice on. Then I used Element to:

  1. Create a surface lattice with beams at every edge in the Meshmixer model
  2. Create a volumetric lattice (based on a hex prism cell shape) inside the part
  3. Merge the two lattices by snapping nodes on the volumetric lattice to nearby nodes on the surface lattice
  4. Create attractor modifiers at locations where I know I'll need more thickness in my lattice, e.g. mechanical features
  5. Apply variable thickness to the lattice based on those modifiers
  6. Refine the resulting mesh & reintroduce mechanical features via Booleans

The trickiest thing by far here is setting the attractor modifiers to the right range & falloff. I've got three things going on here:

  • Bolt holes. These need to be maximum thickness (1.5mm) to accept threads and distribute the load from the bolts.
  • Clamp surfaces. Where the stem clamps to the steer tube and handlebar, the part needs to have relatively high surface area. All lattice beams should lie on the surface itself, and thickness should be high as well.
  • Mechanical stress. I haven't done a full analysis of this part, but in general stress will be concentrated near the clamp surfaces and will be lower in the middle of the part.

Clearly this blog post would be more effective if I ran through every attractor one-by-one and explained how editing them changed the resulting structure, but we'll have to forgo that for now. Suffice it to say that the part above weighs 105g and has roughly the mass distribution I was looking for; I'll update with more details soon :)

Termites, not tornadoes

Added on by Spencer Wright.

Again, from The Mythical Man-Month - emphasis mine:

When one hears of disastrous schedule slippage in a project, he imagines that a series of major calamities must have befallen it. Usually, however, the disaster is due to termites, not tornadoes; and the schedule has slipped imperceptibly but inexorably. Indeed, major calamities are easier to handle; one responds with major force, radical reorganization, the invention of new approaches. The whole team rises to the occasion. 

But the day-by-day slippage is harder to recognize, harder to prevent, harder to make up. Yesterday a key man was sick, and a meeting couldn't be held. Today the machines are all down, because lightning struck the building's power transformer. Tomorrow the disk routines won't start testing, because the first disk is a week late from the factory. Snow, jury duty, family problems, emergency meetings with customers, executive audits — the list goes on and on. Each one only postpones some activity by a half-day or a day. And the schedule slips, one day at a time.

No small slips

Added on by Spencer Wright.

Also from The Mythical Man-Month:

Let us consider an example. Suppose a task is estimated at 12 man-months and assigned to three men for four months, and that there are measurable mileposts A, B, C, D, which are scheduled to fall at the end of each month. Now suppose the first milepost is not reached until two months have elapsed. What are the alternatives facing the manager?
...
3. Reschedule. I like the advice given by P. Fagg, an experienced hardware engineer, "Take no small slips." That is, allow enough time in the new schedule to ensure that the work can be carefully and thoroughly done, and that rescheduling will not have to be done again.

If you need to reschedule, take responsibility - and make sure you only need to reschedule once. 

This book is good.

Conceptual Integrity

Added on by Spencer Wright.

I'm reading The Mythical Man-Month, and this section jumped out at me hard. Emphasis is mine:

Most European cathedrals show differences in plan or architectural style between parts built in different generations by different builders. The later builders were tempted to "improve" upon the designs of the earlier ones, to reflect both changes in fashion and differences in individual taste. So the peaceful Norman transept abuts and contradicts the soaring Gothic nave, and the result proclaims the pridefulness of the builders as much as the glory of God.

Against these, the architectural unity of Reims stands in glorious contrast. The joy that stirs the beholder comes as much from the integrity of the design as from any particular excellences. As the guidebook tells, this integrity was achieved by the self-abnegation of eight generations of builders, each of whom sacrificed some of his ideas so that the whole might be of pure design. The result proclaims not only the glory of God, but also His power to salvage fallen men from their pride.

Even though they have not taken centuries to build, most programming systems reflect conceptual disunity far worse than that of cathedrals. Usually this arises not from a serial succession of master designers, but from the separation of design into many tasks done by many men.

I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas.

It's not that I don't like Chartres; indeed, there's something incredible about projects that outlive their original intent. But when it comes to the most compelling products & systems in my life, I find conceptual integrity to be a *really* powerful force.

See also: the last section in One type of swing, etc.

New TPR designs/drawings

Added on by Spencer Wright.

Made some updates to the models for The Public Radio this weekend. Included:

  • Made a full assembly model of the antenna. I had never done this previously, instead opting to let our suppliers make drawings. No more of that.
  • Fully updated our speaker model to allow for easier mechanical assembly and thru-hole mounting to the PCB. This has been in the works for a while, but I needed to remodel the basket fully - and rethink the way that the lid screws work. I also renamed the speaker "Ground up speaker." You know, because we're redesigning it from the ground up.
  • Added PEM nuts to the assembly (we were using hex nuts before). I also adjusted the full screw stack so that it's fully supported throughout the assembly.
  • Remodeled the knob to be metric. ISO FTW! (Also note that the drawings are all on A4 paper :)
  • Did some basic housekeeping on the model, renaming and reorganizing elements to make maintenance easier.

I also did a bit of work on the EagleCAD files - mostly just updating the speaker hole locations & sizes. Zach has done a bunch more work on this over the past few months; I'm mostly just dealing with mechanical interfaces here.

More on this soon, I hope :)

Exploration and explanation

Added on by Spencer Wright.

Apropos of Displaced in space or time, and just generally along the lines of what I spend a *lot* of time thinking about these days, a few thoughts on Michael Nielsen's recent post titled Toward an exploratory medium for mathematics. Note that my comments are largely placed in the field of CAD, while Nielsen is talking about math; hopefully the result isn't overly confusing.

Nielsen begins by separating out exploration from explanation:

Many experimental cognitive media are intended as explanations... By contrast, the prototype medium we'll develop is intended as part of an open-ended environment for exploration and discovery. Of course, exploration and discovery is a very different process to explanation, and so requires a different kind of medium.

I've touched on the explanatory aspects of CAD in the past (see in particular Computer aided design), but I had never really considered the dichotomy between exploration and explanation in such stark terms. This is partly a result of the fact that most CAD software has documentation built right into it. I've spent a *lot* of time using CAD tools to document parts in both 2D (multi-view PDFs) and 3D (STEP, STL, etc), and have had long conversations with engineers who swear up and down that design tools that don't make documentation easy aren't worth the time of day. 

My inclination is to think that the future will be increasingly integrated - in other words, that the divide between exploration and explanation is antiquated. But perhaps it's more useful to consider the many ways that (multifunctional CAD systems notwithstanding) these two aspects of engineering really have very little overlap. After all, my own CAD software has distinctly different interfaces for the two activities, and the way that I interact with the design interface is very different from the way my manufacturing partners will interact with my design explanations. Perhaps these activities could split even further; I see no a priori reason that this would be harmful at all.

Anyway, onward. Again, Nielsen - now talking specifically about the exploratory side of mathematics:

What we'd ideally like is a medium supporting what we will call semi-concrete reasoning. It would simultaneously provide: (1) the ability to compute concretely, to apply constraints, and to make inferences, i.e., all the benefits we expect a digital computer to apply... and (2) the benefits of paper-and-pencil, notably the flexibility to explore and make inferences about impossible worlds. As we've seen, there is tension between these two requirements. Yet it is highly desirable that both be satisfied simultaneously if we are to build a powerful exploratory medium for doing mathematics. That is true not just in the medium I have described, but in any exploratory medium.

I'll just pause here to say that this idea of "semi-concrete reasoning" is fantastic. Humans are quite capable of holding conflicting values at the same time; if computers are to be our partners in design, they'll need to do some analog of the same.

Instead of using our medium's data model to represent mathematical reality, we can instead use the medium's data model to represent the user's current state of mathematical knowledge. This makes sense, since in an exploratory medium we are not trying to describe what is true – by assumption, we don't know that, and are trying to figure it out – but rather what the user currently knows, and how to best support further inference.

Having adopted this point of view, user interface operations correspond to changes in the user's state of mathematical knowledge, and thus also make changes in the medium's model of that state. There is no problem with inconsistency, because the medium's job is only to model the user's current state of knowledge, and that state of knowledge may well be inconsistent. In a sense, we're actually asking the computer to do less, at least in some ways, by ignoring constraints. And that makes for a more powerful medium.

On this point, I agree that inconsistency itself isn't an issue at all - so long as it's made explicit to the user at all times. If a design fails to meet my needs for, say, manufacturability, then I should have some way of knowing that immediately - whether or not I choose to deal with it now or ever. Again, Nielsen:

Ideally, an exploratory medium would help the user make inferences, give the user control over how these inferences are made, and make it easy for the user to understand and track the chain of reasoning.

Yes.

Using the medium to support only a single stage of inference has several benefits. It naturally makes the chain of inference legible, since it mirrors the way we do inference with paper-and-pencil, every step made explicit, while nonetheless reducing tedious computational work, and helping the user understand what inferences are possible. It's also natural psychologically, since the user is already thinking in terms of these relationships, having defined the objects this way. Finally, and perhaps most importantly, it limits the scope of the interface design problem, since we need not design separate interfaces for the unlimited(!) number of possible inferences. Rather, for every interface operation generating a mathematical object, we need to design a corresponding interface to propagate changes. That's a challenging but finite design problem. Indeed, in the worst case, a “completely manual” interface like that presented earlier may in general be used.

With that said, one could imagine media which perform multiple stages of inference in a single step, such as our medium modifying s in response to changes in the tangent. Designing such a medium would be much more challenging, since potentially many more relationships are involved (meaning more interfaces need to be exposed to the user), and it is also substantially harder to make the chain of reasoning legible to the user.

Even with the simplification of doing single-step inference, there are still many challenging design problems to be solved. Most obviously, we've left open the problem of designing interfaces to support these single stages of inference. In general, solving this interface design problem is an open-ended empirical and psychological question. It's an empirical question insofar as different modes of inference may be useful in different mathematical proofs. And it is a psychological question, insofar as different interfaces may be more or less natural for the user. Every kind of relationship possible in the medium will require its own interface, and thus present a new design challenge. The simplest way to meet that challenge is to use a default-to-manual-editing strategy, mirroring paper-and-pencil.

I recognize that this is a somewhat long quote, but I think it's really critical. To paraphrase: Designing a UI that allows for multidimensional problems is *hard,* and it's hard for human users to glean actionable information from multidimensional data. 

Instead, we should break UIs up into discrete steps, allowing users to visualize and understand relationships piecewise. This means more individual UI modalities need to be designed, but by defaulting to manual editing strategies - which are damn good (viz. paper and pencil) to start with - even that task becomes manageable.
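
To make that piecewise idea concrete, here's a toy model of single-step propagation - my own illustration, not Nielsen's code. Each relationship knows how to recompute one object from another, and an edit updates only the objects directly downstream; any deeper consequences wait for another explicit user action.

```python
class Medium:
    def __init__(self):
        self.values = {}
        self.relations = []  # (source, target, update function)

    def edit(self, name, value):
        # The user's edit itself...
        self.values[name] = value
        # ...plus exactly one stage of inference. Deeper consequences
        # are left for the user to trigger (or to edit manually).
        for src, dst, fn in self.relations:
            if src == name:
                self.values[dst] = fn(value)

medium = Medium()
medium.relations.append(("radius", "area", lambda r: 3.14159 * r ** 2))
medium.edit("radius", 2.0)
print(medium.values)  # {'radius': 2.0, 'area': 12.56636}
```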

There's a lot here; I recommend reading the original post in its entirety. 

Don't let anyone add any features

Added on by Spencer Wright.

Just a quick note:

I can't tell you how many times over the past year I've congratulated Zach and myself, in retrospect, for pulling off The Public Radio like we did. Specifically, that we didn't listen to *anyone* who asked us for new features.

We sold an FM radio in a mason jar, and we packaged it in kraft paper and a brown Uline box. People had asked for rechargeable batteries, and solar charging, and a headphone jack, and a multi-station option, and all manner of other things. We also considered retail packaging, and replacing our potentiometer with a rotary encoder, and (if we go way back) using a custom CNCd enclosure for the radio.

I really, really, can't emphasize this enough: The fact that we ignored our own urges, and politely told everyone else that what they were asking for was "on our backlog," is the only reason that we were able to deliver The Public Radio anything close to on time. 

Delivering a product is *hard,* and you don't get any bonus points for having a CNCd enclosure. Seriously. Don't let anyone add any features.

Allen on science, engineering, and modes of information transfer

Added on by Spencer Wright.

Over the past week I've been reading Thomas J. Allen's Managing the Flow of Technology, which summarizes about a decade of MIT Sloan research into how R&D organizations acquire and transmit knowledge. A number of passages have jumped out at me, and I wanted to comment on them here. Emphasis is mine throughout.

The distinction between science and engineering is key to this book. On page 3:

The scientist's principal goal is a published paper. The technologist's goal is to produce some physical change in the world. This difference in orientation, and the subsequent difference in the nature of the products of the two, has profound implications for those concerned with supplying information to either of the two activities.

And on page 5:

...whereas the provision of information in science involves the gathering, organizing, and distribution of publications, the situation in technology is very different. The technologist must obtain his information either through the very difficult task of decoding and translating physically encoded information or by relying upon direct personal contact and communication with other technologists. His reliance upon the written word will be much less than that of the scientist. 

Starting on page 39:

THE NATURE OF TECHNOLOGY
The differences between science and technology lie not only in the kinds of people who are attracted to them; they are basic to the nature of the activities themselves. Both science and technology develop in a cumulative manner, with each new advance building upon and being a product of vast quantities of work that have gone before. In science all of the work up to any point can be found permanently recorded in literature, which serves as a repository for all scientific knowledge. The cumulative nature of science can be demonstrated quite clearly (Price, 1965a, 1970) by the way in which citations among scientific journal articles cluster and form a regular pattern of development over time.
A journal system has been developed in most technologies that in many ways emulates the system originally developed by scientists; yet the literature published in the majority of these journals lacks, as Price (1965a, 1970) has shown, one of the fundamental characteristics of the scientific literature: it does not cumulate or build upon itself as does the scientific literature. Citations to previous papers or patents are fewer and are most often to the author's own work. Publication occupies a position of less importance than it does in science where it serves to document the end product and establish priority. Because published information is at best secondary to the actual utilization of the technical innovation, this archival is not as essential to ensure the technologist that he is properly credited by future generations. The names of Wilbur and Orville Wright are not remembered because they published papers. As pointed out in chapter 1, the technologist's principal legacy to posterity is encoded in physical, not verbal, structure. Consequently the technologist publishes less and devotes less time to reading than do scientists.
Information is transferred in technology primarily through personal contact. Even in this, however, the technologist differs markedly from the scientist. Scientists working at the frontier of a particular specialty know each other and associate together in what Derek Price has called "invisible colleges." They keep track of one another's work through visits, seminars, and small invitational conferences, supplemented by an informal exchange of written material long before it reaches archival publication. Technologists, on the other hand, keep abreast of their field by close association with co-workers in their own organization. They are limited in forming invisible colleges by the imposition of organizational barriers.

I'll pause here to note that this bothers me somewhat. I enjoy few things more than learning from other people, especially if they inhabit different worlds than I do. Allen continues:

BUREAUCRATIC ORGANIZATION
Unlike scientists, the vast majority of technologists are employed by organizations with a well-defined mission (profit, national defense, space exploration, pollution abatement, and so forth). Mission-oriented organizations necessarily demand of their technologists a degree of identification unknown in most scientific circles. This organizational identification works in two ways to exclude the technologist from informal communication channels outside his organization. First, he is inhibited by the requirements that he work only on problems that are of interest to his employer, and second, he must refrain from early disclosure of the results of his research in order to maintain his employer's advantage over competitors. Both of these constraints violate the rather strong scientific norms that underlie and form the basis of the invisible college. The first of these norms demands that science be free to choose its own problems and that the community of colleagues be the only judges of the relative importance of possible areas of investigation, and the second is that the substantive findings of research are to be fully assigned and communicated to the entire research community. The industrial organization, by preventing its employees from adhering to these two norms, impedes the formation by technologists of anything resembling an invisible college.

Incidentally, I believe that companies lose more by inhibiting cross-pollination than they gain by protecting their competitive position. It would appear that Allen would agree, at least to an extent. On page 42:

The Effect of Turnover
It is this author's suspicion that much of the proprietary protectionism in industry is far overplayed. Despite all of the organizational efforts to prevent it, the state of the art in technology propagates quite rapidly. Either there are too many martinis consumed at engineering conventions or some other mechanism is at work. This other mechanism may well be the itinerant engineer, who passes through quite a number of organizations over the course of a career...
Each time that an engineer leaves an employer, voluntarily or otherwise, he carries some knowledge of the employer's operations, experience, and current technology with him. We are gradually coming to realize that human beings are the most effective carriers of information and that the best way to transfer information between organizations or social systems is to physically transfer a human carrier. Roberts' studies (Roberts and Wainer, 1967) marshal impressive evidence for the effective transfer of space technology from quasi-academic institutions to the industrial sector and eventually to commercial applications in those instances in which technologists left university laboratories to establish their own businesses. This finding is especially impressive in view of the general failure to find evidence of successful transfer of space technology by any other mechanism, despite the fact that many techniques have been tried and a substantial amount of money has been invested in promoting the transfer.
This certainly makes sense. Ideas have no real existence outside of the minds of men. Ideas can be represented in verbal or graphic form, but such representation is necessarily incomplete and cannot be easily structured to fit new situations. The human brain has a capacity for flexibly restructuring information in a manner that has never been approached by even the most sophisticated computer programs. [Just jumping in here to say bravo. -SW] For truly effective transfer of technical information, we must make use of this human ability to recode and restructure information so that it fits into new contexts and situations. Consequently, the best way to transfer technical information is to move a human carrier. The high turnover among engineers results in a heavy migration from organization to organization and is therefore a very effective mechanism for disseminating technology throughout an industry and often to other industries. Every time an engineer changes jobs he brings with him a record of his experiences on the former job and a great amount of what his former organization considers "proprietary" information. Now, of course, the information is usually quite perishable, and its value decays rapidly with time. But a continual flow of engineers among the firms of an industry ensures that no single firm is very far behind in knowledge of what its competitors are doing. So the mere existence of high turnover among R&D personnel vitiates much of the protectionism accorded proprietary information.
As for turnover itself, it is well known that most organizations attempt to minimize it. If all of the above is even partially true, a low level of turnover could be seriously damaging to the interests of the organization. Actually, however, quite the opposite is true. A certain amount of turnover may be not only desirable but absolutely essential to the survival of a technical organization, although just what the optimum turnover level is for an organization is a question that remains to be answered. It will vary from one situation to the next and is highly dependent upon the rate at which the organization's technical staff is growing. After all, it is the influx of new engineers that is most beneficial to the organization, not the exodus of old ones. When growth rate is high, turnover can be low. An organization that is not growing should welcome or encourage turnover. The Engineers' Joint Council figure of 12 percent may even be below the optimum for some organizations. Despite the costs of hiring and processing new personnel, an organization might desire an even higher level of turnover. Although it is impossible to place a price tag on the new state-of-the-art information that is brought in by new employees, it may very well more than counterbalance the costs of hiring. This would be true at least to the point where turnover becomes disruptive to the morale and functioning of the organization. 

Allen also discusses the degree to which academia influences technology development. On page 51:

Project Hindsight was the first of a series of attempts to trace technological advances back to their scientific origins. Within the twenty-year horizon of its backward search, Hindsight was able to find very little contribution from basic science (Sherwin and Isenson, 1967). In most cases, the trail ran cold before reaching any activity that could be considered basic research. In Isenson's words, "It would appear that most advances in the technological state of the art are based on no more recent advances than Ohm's Law or Maxwell's equations."

On page 52:

In yet another recent study, Langrish found little support for a strong science-technology interaction. Langrish wisely avoided the problem of differentiating science from technology. He categorized research by the type of institution in which it was conducted - industry, university, or government establishment. In tracing eighty-four award-winning innovations to their origins, he found that "the role of university as a source of ideas for [industrial] innovation is fairly small" (Langrish, 1971) and that "university science and industrial technology are two quite separate activities which occasionally come into contact with each other" (Langrish, 1969). He argued very strongly that most university basic research is totally irrelevant to societal needs and can be only partially justified for its contributions through training of students.

That's tough stuff, if you ask me. Incidentally, I've considered many times recently whether I myself would go to college if I was just graduating high school today. It would not be a straightforward choice.

Allen then turns to the qualities of the things that engineers actually read. On page 70:

A MORE DETAILED EXAMINATION OF WRITTEN MEDIA
Looking first at the identity of the publications that were read, there are two major categories of publications that engineers use. The first of these might be called formal literature. It comprises books, professional journals, trade publications, and other media that are normally available to the public and have few, if any, restrictions on their distribution. Informal publications, on the other hand, are published by organizations usually for their own internal use; they often contain proprietary material and for that reason are given a very limited distribution. On the average, engineers divide their attention between the two media on about an equal basis, only slightly favoring the informal publications (table 4.3). Because engineering reports are usually much longer than journal articles and because books are used only very briefly for quite specific purposes, each instance of report reading takes twice as long as an instance of journal or book reading. The net result is a threefold greater expenditure of time on informal reports. We can conclude from this brief overview that the unpublished engineering report occupies a position that is at least as important as that of the book or journal in the average engineer's reading portfolio.

Here I should note that I read this through the lens of someone whose public blog is essentially an ongoing and highly detailed series of informal reports. I'm certainly no scientist, and in general my writing isn't particularly academic. I'm doing decidedly applied work, and I document it (including what most companies would call proprietary information about my products and the results of my research) for anyone to read and repurpose as they please. 

Allen continues, explaining why engineering journals aren't really used by practicing engineers. On page 73:

The publications of the professional engineering societies in all of these diverse fields are little used by their intended audience.
Why should this be so? The answer is not difficult to find. Most professional engineering journals are utterly incomprehensible to the average engineer. They often rely heavily upon mathematical presentations, which can be understood by only a limited audience. The average engineer has been away from the university for a number of years and has usually allowed his mathematical skills to degenerate. Even if he understood the mathematics at one time, it is unlikely that he can now. The articles, even in engineering society journals, are written for a very limited audience, usually those few at the very forefront of a technology. Just as in science, the goal of the author is not to communicate to the outsider but to gain for himself the recognition of his peers.

It's funny: the purpose of this blog is to communicate with outsiders AND gain the recognition of my peers. I'd like to think, in fact, that it fits the description of the ideal engineering literature that Allen puts forth on page 75:

The professional societies could publish a literature form whose technical content is high, but which is understandable by the audience to whom it is directed...The task is not an impossible one. Engineers will read journals when these journals are written in a form and style that they can comprehend. Furthermore, technological information can be provided in this form. Why then do the professional societies continue to publish material that only a small minority of their membership can use? If this information can be provided in a form that the average engineer can understand, why haven't the professional societies done so?
The obvious answer to these questions is that the societies have only recently become aware of the problem. In the past, they were almost totally ignorant of even the composition of their membership, and they still know little of their information needs. Thus, they have never had the necessary information to formulate realistic goals or policy. Perhaps the most unfortunate circumstance that ever befell the engineering profession in the United States is that at the time when it first developed a self-awareness and began to form professional societies, it looked to the scientific societies, which had then existed for over 200 years, to determine their form and function.

Interestingly, though, I do not fit the description of the engineer that Allen gives on page 99:

THE IMPORTANCE OF COMMUNICATION WITHIN THE LABORATORY
Most engineers are employed by bureaucratic organizations. Academic scientists are not. The engineer sees the organization as controller of the only reward system of any real importance to him and patterns his behavior accordingly. While the academic scientist finds his principal reference group and feels a high proportion of his influence from outside the organization, for the engineer, the exogenous forces simply do not exist. The organization in which he is employed controls his pay, his promotions, and, to a very great extent, his prestige in the community.

To be clear, I get a ton out of working closely with people. I worked alone building bikes for a full three years, and was solo and very isolated during much of the two-year construction project I completed after college; the lack of camaraderie in those situations was hard on me. I learned through that process that working with people - and having a mutual feeling of respect and enthusiasm - was incredibly important. I've gotten a ton out of all of the colleagues I've had since then - including many who I initially clashed with.

But exogenous forces in my life absolutely exist, and are important too. I benefit greatly from keeping contact with people elsewhere in my industry - and people outside of it - and I'm confident that the companies I've worked for have benefited from my network too.

My belief is that there's more room for these things to coexist than most companies realize. As evidence, I would present that when I began working on metal 3D printing, I knew nothing about it - and didn't work at a company that had any particular interest in it in the first place. I believe that it is only through my openness that I've gotten where I am today, and through that openness I've also vastly improved my access to experienced engineers across the industry. I've gotten cold emails from people working at some of the biggest and most advanced R&D organizations in the world, something I don't think would ever have happened had I not shared the way I did. And I'm confident that my relationships with these people are mutually beneficial - both to us as people and to the companies who employ us.

I'm about a third of the way through Managing the Flow of Technology now; I'll probably finish it in the next month. I recommend it.

EBM surface finishes and MMP

Added on by Spencer Wright.

When I visited MicroTek Finishing, a Cincinnati-based precision finishing company, in late 2014, I was intent on printing my seatmast topper with laser powder bed fusion. DMLS's install base is relatively large, making it easy to source vendors and compare pricing. And while DMLS parts' surface finish and dimensional accuracy can leave something to be desired, they can be put into service with minimal post processing.

But as I was saying goodbye to Tim Bell (my host at MicroTek) that afternoon, he planted a seed. I should try building my parts in EBM, he said - and see if MicroTek's MMP process could bring the rough parts up to a useable state.

That same day, I asked Dustin and Dave (both of whom I worked with on my seatmast topper) what they thought of the idea. Dave had extensive experience on an Arcam A2, and thought it was definitely worth trying out. Relative to DMLS, EBM is a quick process (for more details on Arcam and EBM, see the Gongkai AM user guide), and a big portion of the cost structure of metal AM parts is the amount of time they take to print. Furthermore, parts can often be stacked many layers high on EBM machines, allowing the fixed costs of running a build to be distributed over a larger number of parts. And while EBM parts do tend to be rough (and have larger minimum feature sizes than DMLS), they also tend to warp and distort less - making the manufacturing plan a bit simpler in that respect.
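
To see why stacking matters so much, consider a rough sketch with entirely made-up numbers (actual machine rates, build times, and packing densities vary widely):

```python
fixed_per_build = 1000.0  # setup, teardown, powder handling ($, hypothetical)
hourly_rate = 40.0        # machine time ($/hr, hypothetical)

def cost_per_part(parts, hours):
    # The fixed cost of a build gets divided across every part in it.
    return (fixed_per_build + hourly_rate * hours) / parts

print(cost_per_part(parts=20, hours=24))   # a single layer of parts: $98 each
print(cost_per_part(parts=200, hours=80))  # stacked ten high: $21 each
```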

Shortly after that trip, I reached out to Addaero Manufacturing. I visited them soon after, and then asked if they'd be interested in exploring an EBM->MMP process chain. They were, and provided three identical parts to experiment on.

The part in question is the head of a seatpost assembly for high-end road bikes. The part itself is small - about 70mm tall, with a 35mm square footprint. As built, it's just 32g of titanium 6/4. Add in a piece of carbon fiber tubing (88g for a 300mm length) and some rail clamp hardware (50g), and the entire seatpost assembly should be in the 175g range - on par with the lightest seatposts on the market today.

As a product manager who's ultimately optimizing for commercial viability, I had three questions going into this process:

  1. How do the costs of the different manufacturing process chains compare? 
  2. How do the resulting parts compare functionally, i.e. in destructive testing?
  3. Functionality being equal, how do the aesthetics (and hence desirability) of the parts compare?

I'll write more about the second point later; in this post, my primary aim is to introduce MMP and compare the different process chains from a financial and operational standpoint.

Basics of surface texture

As confirmed by a 1990 NIST report titled Surface Finish Metrology Tutorial, "there is a bewildering variety of techniques for measuring surface finish." Moreover, most measurement methods focus only on the primary texture - the roughness itself - and incorporate some method of controlling for waviness and form. From the same report:

The measured profile is a combination of the primary and secondary texture. These distinctions are useful but they are arbitrary in nature and hence, vary with manufacturing process. It has been shown, but not conclusively proven that the functional effects of form error, waviness and roughness are different. Therefore, it has become an accepted practice to exclude waviness before roughness is numerically assessed.

Surface finish is usually measured using the stylus technique:

The most common technique for measuring surface profile makes use of a sharp diamond stylus. The stylus is drawn over an irregular surface at a constant speed to obtain the variation in surface height with horizontal displacement.

The most common surface texture metric is Ra. (For a good, quick, technical description of the varieties of surface texture metrics, see this PDF from Accretech.) Ra measures the average deviation of the profile from the mean line (the related Rq also measures deviation from the mean line, but using a root mean square method), and is used across a variety of industries and manufacturing methods. But it's incapable of describing a number of important aspects of a part. For instance, it's critical (for both aesthetic and functional reasons) that my parts have Rsk (skewness) values close to zero - meaning that their surfaces are free from flaws like pits and warts. In other words, I'd take a consistent, brushed surface over one that's highly polished but has a few deep cuts/pits.
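
For the curious, these metrics are simple enough to write down directly. Here's a minimal sketch of the standard definitions, assuming an evenly sampled profile with waviness and form already filtered out:

```python
import numpy as np

def roughness_metrics(z):
    # z: 1D array of surface heights along the measurement path.
    dev = z - z.mean()                 # deviation from the mean line
    ra = np.abs(dev).mean()            # Ra: arithmetic average roughness
    rq = np.sqrt((dev ** 2).mean())    # Rq: root mean square roughness
    rsk = (dev ** 3).mean() / rq ** 3  # Rsk: skewness; < 0 pits, > 0 spikes
    return ra, rq, rsk
```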

I should note, of course, that surface finish is a result of the total manufacturing process chain. If the near net shape part (straight out of the EBM machine) is rough and pitted, then it'll be difficult to ever make it acceptable - and the methods required to do so will vary widely. 

MicroTek and MMP

MicroTek is just one in an international network of companies that perform MMP, which grew out of a Swiss company called BESTinCLASS Industries. The MMP process is closely guarded; neither MicroTek nor BiC discloses enough about the process to really understand how it works. From MicroTek's website:

MMP Technology is a mechanical-physical-catalyst surface treatment applied to items placed inside a processing tank.  MMP technology is truly different from traditional polishing processes because of the way it interacts with the surface being treated.
MMP Technology uses a mechanical cutting process at a very small scale (not an acid attack or any other process that could alter the part's metallurgical properties), meaning it can distinguish between micro-roughness and small features. The process actually maps the surface as a collection of frequencies of roughness, removing first the highest frequencies, then removing progressively lower frequencies.
Unlike other polishing processes, MMP Technology can stop at any point along the way, so now for the first time it is possible to selectively remove only the ranges of roughness that you don't want on the surface, giving you the option of leaving behind lower frequencies of roughness that could be beneficial to the function of the part.

To hear Tim and JT Stone tell it, MicroTek essentially does a Fourier transform on the topography of the part. They analyze the surface finish as the combination of many low and high frequency functions, and begin the MMP process by characterizing those different functions and identifying which ones to remove. Then, by selecting "an appropriate regimen of MMP Technology from the several hundred treatments available," they selectively remove the undesirable aspects of the surface finish - while still preserving the underlying form of the part.

This is worth highlighting: traditionally, polishing is a process whereby a part is eaten away by abrasive media. With each successive step, progressively smaller scratches are made in the part's surface. You're constantly cutting down the peaks of the part, and as a result the form gets smaller and smaller over time. With MMP, you have the flexibility to remove the fine, high-frequency roughness while keeping the longer wavelengths - maintaining the original intended shape.
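
The MMP process itself is proprietary, but the frequency-separation idea is easy to illustrate. Here's a toy version on a 1D profile - decompose with an FFT, zero out the high-frequency components, and rebuild. This is emphatically not how MMP works mechanically; it's just a picture of the math:

```python
import numpy as np

def remove_high_frequencies(z, keep_fraction=0.1):
    # Keep only the lowest `keep_fraction` of spatial frequencies:
    # form and waviness survive, micro-roughness disappears.
    spectrum = np.fft.rfft(z)
    cutoff = int(len(spectrum) * keep_fraction)
    spectrum[cutoff:] = 0
    return np.fft.irfft(spectrum, n=len(z))
```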

The parts

Addaero printed three identical parts for me. I sent two to MicroTek. They processed one for fatigue resistance, and the other they "made BLING."


MicroTek sent detailed inspection reports with the parts, and the picture they paint is fascinating. MMP reduced both Ra and Rq drastically, and Rt dropped significantly as well. Rsk is a bit of a different story, however: in one of the measurement locations ("Side of leg"), it dropped well into the negative range. You'll recall that the absolute value of skewness is really the issue here; a negative number (indicating pitting) is just as bad as a positive (indicating warts/spikes) one.

I've put the raw data in a Google Sheet, here; the full inspection reports are here and here. The charts below show most of the relevant information, broken down by the area of the part being tested. A helpful description of the part's areas ("V-neck face," etc) is here.

[Charts: Ra (roughness average), Rq (roughness, root mean square), and Rt (roughness total), all values in μm, plus Rsk (roughness skewness, which is dimensionless) - each broken down by the area of the part being tested.]

MicroTek also sent a series of photos taken with a Hirox digital microscope at a variety of magnifications:

If it's not clear from all of the photos and charts above, the improvement in the parts due to MMP is really remarkable. The as-printed part is really rough - on average, it's about as rough (Ra/Rq) as 120-grit sandpaper (see this for a good analysis of sandpaper surface texture). MicroTek was able to eliminate the vast majority of the total and arithmetic mean roughness; both parts they processed feel very much like finished products.

The pitting, however, is a problem. To be clear, it's not a result of the MMP process; all they did was expose flaws that were already in the part. Many of these could probably be eliminated on future batches. First, printing the parts on a newer Arcam system (like the Q20) might improve the as-printed texture significantly. And second, MicroTek can investigate more complex treatments that allow for the offending frequencies to be eliminated more thoroughly. I'll be exploring these (and other) options in the coming months.

Assembly

Before putting the seatposts together, a little bit of prep was necessary. The inner diameters of both the seatpost and saddle clamp cylinders were slightly undersized, and there were warts (remnants of support structures) left on the undersides of the shoulder straps. I had intentionally left these untouched when I sent the parts to MicroTek, as I wanted to see how little post processing I could get away with. The MMP process took them down slightly, but not nearly enough to put the parts into service.

Fixing that was pretty straightforward - just a few minutes each with a file. In future iterations, I'm hoping that some light design modifications - and dialing in the EBM build parameters - will minimize this work. If not, then I'll probably add CNC machining into my process chain (after printing and before finishing).

With the inner diameters trued up, the parts could be dry fitted to the carbon fiber tubing I'm using as a seatpost:

I'll be gluing the assemblies together with 3M DP420 this week, and then I'll send them out for testing. These parts will be put to the same ISO standard that my seatmast topper passed last summer, and I'm particularly curious to know whether the different levels of post processing have any effect on their strength. In high fatigue cycle applications (this paper defines "high fatigue cycle" as N>100,000, which is exactly what my parts will be tested to), improvements in surface finish (lower Ra) have been shown to increase fatigue life. If some form of surface finishing (MMP or otherwise) means that I can print a lighter AND stronger part, that'll definitely help justify the expense.

Cost

With my current design and a batch size of 275 (a full batch in an Arcam Q20), my as-printed cost will be under $100. MMP will cost an additional $40-75 (depending on finish level), though those numbers were based on smaller quantities. I'd hope that the rollup cost to me is under $150.

In addition to these parts, a full seatpost requires about $25 worth of carbon fiber, a few dollars worth of glue, and (I suspect) under ten minutes of assembly time. They'll also require saddle rail hardware, which I'm budgeting an additional $25 for, and some packaging - under $10. All told, my cost of goods sold would be about $215.
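
As a quick sanity check on that rollup (with glue estimated at a few dollars):

```python
head = 150       # printed + finished seatpost head (hoped-for rollup)
carbon = 25      # carbon fiber tubing
glue = 3         # epoxy, per assembly (estimate)
hardware = 25    # saddle rail hardware budget
packaging = 10   # packaging budget

print(head + carbon + glue + hardware + packaging)  # 213 - "about $215"
```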

That's a fancy seatpost, but it's not completely unreasonable. My goal, at the moment, is to get that cost down to $150.

More updates soon :)


Thanks to Addaero and MicroTek for their ongoing help with this project.

Photos from NYIO's trip to the Hudson Yards project

Added on by Spencer Wright.

Last week, the New York Infrastructure Observatory was lucky enough to tour the Hudson Yards Redevelopment project - the largest private real estate development in US history. From my announcement email:

I just want to reiterate that: This is 26+ acres of active rail yard, on which Related Companies and Oxford Properties are building over 12 million square feet of office, residential, and retail space, designed by Kohn Pedersen Fox. And the trains underneath (did I mention that all of this development is being built on a huge platform supported by columns?) will keep running throughout construction.

The Hudson Yards project will remake a big part of the NYC skyline, and includes large changes to the infrastructure in the area. It's a once-in-a-generation project, and it was *really* great to see it in person.

You can see my photos (with captions, if you click them) below. Gabe Ochoa also posted a bunch on his blog, which I recommend checking out too!

Hudson Yards from the new 7 train entrance

Also: You should really read The Power Broker.