Manufacturing guy-at-large.

New TPR designs/drawings

Added on by Spencer Wright.

Made some updates to the models for The Public Radio this weekend. Included:

  • Made a full assembly model of the antenna. I had never done this previously, instead opting to let our suppliers make drawings. No more of that.
  • Fully updated our speaker model to allow for easier mechanical assembly and through-hole mounting to the PCB. This has been in the works for a while, but I needed to fully remodel the basket - and rethink the way that the lid screws work. I also renamed the speaker "Ground up speaker." You know, because we're redesigning it from the ground up.
  • Added PEM nuts to the assembly (it was hex nuts before). I also adjusted the full screw stack so that it's fully supported throughout the assembly.
  • Remodeled the knob to be metric. ISO FTW! (Also note that the drawings are all on A4 paper :)
  • Did some basic housekeeping on the model, renaming and reorganizing elements to make maintenance easier.

I also did a bit of work to the EagleCAD - mostly just updating the speaker hole locations & sizes. Zach has done a bunch more work on this over the past few months; I'm mostly just dealing with mechanical interfaces here.

More on this soon, I hope :)

Exploration and explanation

Added on by Spencer Wright.

Apropos of Displaced in space or time, and just generally along the lines of what I spend a *lot* of time thinking about these days, a few thoughts on Michael Nielsen's recent post titled Toward an exploratory medium for mathematics. Note that my comments are largely placed in the field of CAD, while Nielsen is talking about math; hopefully the result isn't overly confusing.

Nielsen begins by separating out exploration from explanation:

Many experimental cognitive media are intended as explanations... By contrast, the prototype medium we'll develop is intended as part of an open-ended environment for exploration and discovery. Of course, exploration and discovery is a very different process to explanation, and so requires a different kind of medium.

I've touched on the explanatory aspects of CAD in the past (see in particular Computer aided design), but I had never really considered the dichotomy between exploration and explanation in such stark terms. This is partly a result of the fact that most CAD software has documentation built right into it. I've spent a *lot* of time using CAD tools to document parts in both 2D (multi-view PDFs) and 3D (STEP, STL, etc), and have had long conversations with engineers who swear up and down that design tools that don't make documentation easy aren't worth the time of day. 

My inclination is to think that the future will be increasingly integrated - in other words, that the divide between exploration and explanation is antiquated. But perhaps it's more useful to consider the many ways that (multifunctional CAD systems notwithstanding) these two aspects of engineering really have very little overlap. After all, my own CAD software has distinctly different interfaces for the two activities, and the way that I interact with the design interface is very different from the way my manufacturing partners will interact with my design explanations. Perhaps these activities could split even further; I see no a priori reason that this would be harmful at all.

Anyway, onward. Again, Nielsen - now talking specifically about the exploratory side of mathematics:

What we'd ideally like is a medium supporting what we will call semi-concrete reasoning. It would simultaneously provide: (1) the ability to compute concretely, to apply constraints, and to make inferences, i.e., all the benefits we expect a digital computer to apply... and (2) the benefits of paper-and-pencil, notably the flexibility to explore and make inferences about impossible worlds. As we've seen, there is tension between these two requirements. Yet it is highly desirable that both be satisfied simultaneously if we are to build a powerful exploratory medium for doing mathematics. That is true not just in the medium I have described, but in any exploratory medium.

I'll just pause here to say that this idea of "semi-concrete reasoning" is fantastic. Humans are quite capable of holding conflicting values at the same time; if computers are to be our partners in design, they'll need to do some analog of the same.

Instead of using our medium's data model to represent mathematical reality, we can instead use the medium's data model to represent the user's current state of mathematical knowledge. This makes sense, since in an exploratory medium we are not trying to describe what is true – by assumption, we don't know that, and are trying to figure it out – but rather what the user currently knows, and how to best support further inference.

Having adopted this point of view, user interface operations correspond to changes in the user's state of mathematical knowledge, and thus also make changes in the medium's model of that state. There is no problem with inconsistency, because the medium's job is only to model the user's current state of knowledge, and that state of knowledge may well be inconsistent. In a sense, we're actually asking the computer to do less, at least in some ways, by ignoring constraints. And that makes for a more powerful medium.

On this point, I agree that inconsistency itself isn't an issue at all - so long as it's made explicit to the user at all times. If a design fails to meet my needs for, say, manufacturability, then I should have some way of knowing that immediately - whether I choose to deal with it now or ever.
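A minimal sketch of what that might look like - a toy data model (all names hypothetical) that never rejects an edit, but keeps every violated constraint visible:

```python
# Toy sketch: a design "medium" that accepts inconsistent state but keeps
# every violated constraint visible to the user. All names are hypothetical.

class Medium:
    def __init__(self):
        self.facts = {}        # the user's current assertions
        self.constraints = []  # (label, predicate) pairs checked against the facts

    def assert_fact(self, name, value):
        # Never reject an edit - just record it.
        self.facts[name] = value

    def add_constraint(self, label, predicate):
        self.constraints.append((label, predicate))

    def violations(self):
        # Report (don't enforce) every constraint the current state breaks.
        return [label for label, pred in self.constraints if not pred(self.facts)]

m = Medium()
m.add_constraint("printable wall >= 0.5mm",
                 lambda f: f.get("wall_thickness", 1.0) >= 0.5)
m.assert_fact("wall_thickness", 0.4)  # an "impossible world" - allowed
print(m.violations())                 # ['printable wall >= 0.5mm']
```

The point is just that the violation is surfaced, not enforced; the user stays in control of when (or whether) to resolve it. Again, Nielsen: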

Ideally, an exploratory medium would help the user make inferences, give the user control over how these inferences are made, and make it easy for the user to understand and track the chain of reasoning.

Yes.

Using the medium to support only a single stage of inference has several benefits. It naturally makes the chain of inference legible, since it mirrors the way we do inference with paper-and-pencil, every step made explicit, while nonetheless reducing tedious computational work, and helping the user understand what inferences are possible. It's also natural psychologically, since the user is already thinking in terms of these relationships, having defined the objects this way. Finally, and perhaps most importantly, it limits the scope of the interface design problem, since we need not design separate interfaces for the unlimited(!) number of possible inferences. Rather, for every interface operation generating a mathematical object, we need to design a corresponding interface to propagate changes. That's a challenging but finite design problem. Indeed, in the worst case, a “completely manual” interface like that presented earlier may in general be used.

With that said, one could imagine media which perform multiple stages of inference in a single step, such as our medium modifying s in response to changes in the tangent. Designing such a medium would be much more challenging, since potentially many more relationships are involved (meaning more interfaces need to be exposed to the user), and it is also substantially harder to make the chain of reasoning legible to the user.

Even with the simplification of doing single-step inference, there are still many challenging design problems to be solved. Most obviously, we've left open the problem of designing interfaces to support these single stages of inference. In general, solving this interface design problem is an open-ended empirical and psychological question. It's an empirical question insofar as different modes of inference may be useful in different mathematical proofs. And it is a psychological question, insofar as different interfaces may be more or less natural for the user. Every kind of relationship possible in the medium will require its own interface, and thus present a new design challenge. The simplest way to meet that challenge is to use a default-to-manual-editing strategy, mirroring paper-and-pencil.

I recognize that this is a somewhat long quote, but I think it's really critical. To paraphrase: Designing a UI that allows for multidimensional problems is *hard,* and it's hard for human users to glean actionable information from multidimensional data. 

Instead, we should break UIs up into discrete steps, allowing users to visualize and understand relationships piecewise. This means more individual UI modalities need to be designed, but by defaulting to manual editing strategies - which are damn good (viz. paper and pencil) to start with - even that task becomes manageable.
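To make the idea of single-stage inference concrete, here's a toy sketch (mine, not Nielsen's - the names and rules are made up) in which each edit propagates through exactly one layer of relationships, and each step is logged:

```python
# Toy sketch of single-stage inference: when a value changes, only the
# rules that read it directly are recomputed, and each step is logged so
# the chain of reasoning stays legible. Names and rules are hypothetical.

values = {"radius": 2.0}
rules = {
    # derived name -> (input names, function)
    "diameter": (["radius"], lambda r: 2 * r),
    "area":     (["radius"], lambda r: 3.14159 * r ** 2),
}

def set_value(name, value):
    values[name] = value
    for out, (inputs, fn) in rules.items():
        if name in inputs:  # propagate exactly one stage
            values[out] = fn(*[values[i] for i in inputs])
            print(f"{name} changed -> recomputed {out} = {values[out]:.3f}")

set_value("radius", 3.0)
# radius changed -> recomputed diameter = 6.000
# radius changed -> recomputed area = 28.274
```

Every inference is one explicit, legible step - just like paper and pencil, minus the tedium.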

There's a lot here; I recommend reading the original post in its entirety. 

Don't let anyone add any features

Added on by Spencer Wright.

Just a quick note:

I can't tell you how many times over the past year I've congratulated Zach and myself, in retrospect, for pulling off The Public Radio like we did. Specifically, that we didn't listen to *anyone* who asked us for new features.

We sold an FM radio in a mason jar, and we packaged it in kraft paper and a brown Uline box. People had asked for rechargeable batteries, and solar charging, and a headphone jack, and a multi-station option, and all other manner of things. We also considered retail packaging, and replacing our potentiometer with a rotary encoder, and (if we go way back) using a custom CNCd enclosure for the radio.

I really, really, can't emphasize this enough: The fact that we ignored our own urges, and politely told everyone else that what they were asking for was "on our backlog," is the only reason that we were able to deliver The Public Radio anything close to on time. 

Delivering a product is *hard,* and you don't get any bonus points for having a CNCd enclosure. Seriously. Don't let anyone add any features.

Allen on science, engineering, and modes of information transfer

Added on by Spencer Wright.

Over the past week I've been reading Thomas J. Allen's Managing the Flow of Technology, which summarizes about a decade of MIT Sloan research into how R&D organizations acquire and transmit knowledge. A number of passages have jumped out to me, and I wanted to comment on them here. Emphasis is mine throughout.

The distinction between science and engineering is key to this book. On page 3:

The scientist's principal goal is a published paper. The technologist's goal is to produce some physical change in the world. This difference in orientation, and the subsequent difference in the nature of the products of the two, has profound implications for those concerned with supplying information to either of the two activities.

And on page 5:

...whereas the provision of information in science involves the gathering, organizing, and distribution of publications, the situation in technology is very different. The technologist must obtain his information either through the very difficult task of decoding and translating physically encoded information or by relying upon direct personal contact and communication with other technologists. His reliance upon the written word will be much less than that of the scientist. 

Starting on page 39:

THE NATURE OF TECHNOLOGY
The differences between science and technology lie not only in the kinds of people who are attracted to them; they are basic to the nature of the activities themselves. Both science and technology develop in a cumulative manner, with each new advance building upon and being a product of vast quantities of work that have gone before. In science all of the work up to any point can be found permanently recorded in literature, which serves as a repository for all scientific knowledge. The cumulative nature of science can be demonstrated quite clearly (Price, 1965a, 1970) by the way in which citations among scientific journal articles cluster and form a regular pattern of development over time.
A journal system has been developed in most technologies that in many ways emulates the system originally developed by scientists; yet the literature published in the majority of these journals lacks, as Price (1965a, 1970) has shown, one of the fundamental characteristics of the scientific literature: it does not cumulate or build upon itself as does the scientific literature. Citations to previous papers or patents are fewer and are most often to the author's own work. Publication occupies a position of less importance than it does in science where it serves to document the end product and establish priority. Because published information is at best secondary to the actual utilization of the technical innovation, this archival is not as essential to ensure the technologist that he is properly credited by future generations. The names of Wilbur and Orville Wright are not remembered because they published papers. As pointed out in chapter 1, the technologist's principal legacy to posterity is encoded in physical, not verbal, structure. Consequently the technologist publishes less and devotes less time to reading than do scientists.
Information is transferred in technology primarily through personal contact. Even in this, however, the technologist differs markedly from the scientist. Scientists working at the frontier of a particular specialty know each other and associate together in what Derek Price has called "invisible colleges." They keep track of one another's work through visits, seminars, and small invitational conferences, supplemented by an informal exchange of written material long before it reaches archival publication. Technologists, on the other hand, keep abreast of their field by close association with co-workers in their own organization. They are limited in forming invisible colleges by the imposition of organizational barriers.

I'll pause here to note that this bothers me somewhat. I enjoy few things more than learning from other people, especially if they inhabit different worlds than I do. Allen continues:

BUREAUCRATIC ORGANIZATION
Unlike scientists, the vast majority of technologists are employed by organizations with a well-defined mission (profit, national defense, space exploration, pollution abatement, and so forth). Mission-oriented organizations necessarily demand of their technologists a degree of identification unknown in most scientific circles. This organizational identification works in two ways to exclude the technologist from informal communication channels outside his organization. First, he is inhibited by the requirements that he work only on problems that are of interest to his employer, and second, he must refrain from early disclosure of the results of his research in order to maintain his employer's advantage over competitors. Both of these constraints violate the rather strong scientific norms that underlie and form the basis of the invisible college. The first of these norms demands that science be free to choose its own problems and that the community of colleagues be the only judges of the relative importance of possible areas of investigation, and the second is that the substantive findings of research are to be fully assigned and communicated to the entire research community. The industrial organization, by preventing its employees from adhering to these two norms, impedes the formation by technologists of anything resembling an invisible college.

Incidentally, I believe that companies lose more by inhibiting cross pollination than they gain by protecting their competitive position. It would appear that Allen would agree, at least to an extent. On page 42:

The Effect of Turnover
It is this author's suspicion that much of the proprietary protectionism in industry is far overplayed. Despite all of the organizational efforts to prevent it, the state of the art in technology propagates quite rapidly. Either there are too many martinis consumed at engineering conventions or some other mechanism is at work. This other mechanism may well be the itinerant engineer, who passes through quite a number of organizations over the course of a career...
Each time that an engineer leaves an employer, voluntarily or otherwise, he carries some knowledge of the employer's operations, experience, and current technology with him. We are gradually coming to realize that human beings are the most effective carriers of information and that the best way to transfer information between organizations or social systems is to physically transfer a human carrier. Roberts' studies (Roberts and Wainer, 1967) marshal impressive evidence for the effective transfer of space technology from quasi-academic institutions to the industrial sector and eventually to commercial applications in those instances in which technologists left university laboratories to establish their own businesses. This finding is especially impressive in view of the general failure to find evidence of successful transfer of space technology by any other mechanism, despite the fact that many techniques have been tried and a substantial amount of money has been invested in promoting the transfer.
This certainly makes sense. Ideas have no real existence outside of the minds of men. Ideas can be represented in verbal or graphic form, but such representation is necessarily incomplete and cannot be easily structured to fit new situations. The human brain has a capacity for flexibly restructuring information in a manner that has never been approached by even the most sophisticated computer programs. [Just jumping in here to say bravo. -SW] For truly effective transfer of technical information, we must make use of this human ability to recode and restructure information so that it fits into new contexts and situations. Consequently, the best way to transfer technical information is to move a human carrier. The high turnover among engineers results in a heavy migration from organization to organization and is therefore a very effective mechanism for disseminating technology throughout an industry and often to other industries. Every time an engineer changes jobs he brings with him a record of his experiences on the former job and a great amount of what his former organization considers "proprietary" information. Now, of course, the information is usually quite perishable, and its value decays rapidly with time. But a continual flow of engineers among the firms of an industry ensures that no single firm is very far behind in knowledge of what its competitors are doing. So the mere existence of high turnover among R&D personnel vitiates much of the protectionism accorded proprietary information.
As for turnover itself, it is well known that most organizations attempt to minimize it. If all of the above is even partially true, a low level of turnover could be seriously damaging to the interests of the organization. Actually, however, quite the opposite is true. A certain amount of turnover may be not only desirable but absolutely essential to the survival of a technical organization, although just what the optimum turnover level is for an organization is a question that remains to be answered. It will vary from one situation to the next and is highly dependent upon the rate at which the organization's technical staff is growing. After all, it is the influx of new engineers that is most beneficial to the organization, not the exodus of old ones. When growth rate is high, turnover can be low. An organization that is not growing should welcome or encourage turnover. The Engineers' Joint Council figure of 12 percent may even be below the optimum for some organizations. Despite the costs of hiring and processing new personnel, an organization might desire an even higher level of turnover. Although it is impossible to place a price tag on the new state-of-the-art information that is brought in by new employees, it may very well more than counterbalance the costs of hiring. This would be true at least to the point where turnover becomes disruptive to the morale and functioning of the organization. 

Allen also discusses the degree to which academia influences technology development. On page 51:

Project Hindsight was the first of a series of attempts to trace technological advances back to their scientific origins. Within the twenty-year horizon of its backward search, Hindsight was able to find very little contribution from basic science (Sherwin and Isenson, 1967). In most cases, the trail ran cold before reaching any activity that could be considered basic research. In Isenson's words, "It would appear that most advances in the technological state of the art are based on no more recent advances than Ohm's Law or Maxwell's equations."

On page 52:

In yet another recent study, Langrish found little support for a strong science-technology interaction. Langrish wisely avoided the problem of differentiating science from technology. He categorized research by the type of institution in which it was conducted - industry, university, or government establishment. In tracing eighty-four award-winning innovations to their origins, he found that "the role of university as a source of ideas for [industrial] innovation is fairly small" (Langrish, 1971) and that "university science and industrial technology are two quite separate activities which occasionally come into contact with each other" (Langrish, 1969). He argued very strongly that most university basic research is totally irrelevant to societal needs and can be only partially justified for its contributions through training of students.

That's tough stuff, if you ask me. Incidentally, I've considered many times recently whether I myself would go to college if I was just graduating high school today. It would not be a straightforward choice.

Allen then turns to the kinds of things that engineers actually read. On page 70:

A MORE DETAILED EXAMINATION OF WRITTEN MEDIA
Looking first at the identity of the publications that were read, there are two major categories of publications that engineers use. The first of these might be called formal literature. It comprises books, professional journals, trade publications, and other media that are normally available to the public and have few, if any, restrictions on their distribution. Informal publications, on the other hand, are published by organizations usually for their own internal use; they often contain proprietary material and for that reason are given a very limited distribution. On the average, engineers divide their attention between the two media on about an equal basis, only slightly favoring the informal publications (table 4.3). Because engineering reports are usually much longer than journal articles and because books are used only very briefly for quite specific purposes, each instance of report reading takes twice as long as an instance of journal or book reading. The net result is a threefold greater expenditure of time on informal reports. We can conclude from this brief overview that the unpublished engineering report occupies a position that is at least as important as that of the book or journal in the average engineer's reading portfolio.

Here I should note that I read this through the lens of someone whose public blog is essentially an ongoing and highly detailed series of informal reports. I'm certainly no scientist, and in general my writing isn't particularly academic. I'm doing decidedly applied work, and I document it (including what most companies would call proprietary information about my products and the results of my research) for anyone to read and repurpose as they please. 

Allen continues, explaining why engineering journals aren't really used by practicing engineers. On page 73:

The publications of the professional engineering societies in all of these diverse fields are little used by their intended audience.
Why should this be so? The answer is not difficult to find. Most professional engineering journals are utterly incomprehensible to the average engineer. They often rely heavily upon mathematical presentations, which can be understood by only a limited audience. The average engineer has been away from the university for a number of years and has usually allowed his mathematical skills to degenerate. Even if he understood the mathematics at one time, it is unlikely that he can now. The articles, even in engineering society journals, are written for a very limited audience, usually those few at the very forefront of a technology. Just as in science, the goal of the author is not to communicate to the outsider but to gain for himself the recognition of his peers.

It's funny: the purpose of this blog is to communicate with outsiders AND gain the recognition of my peers. I'd like to think, in fact, that it fits the description of the ideal engineering literature that Allen puts forth on page 75:

The professional societies could publish a literature form whose technical content is high, but which is understandable by the audience to whom it is directed...The task is not an impossible one. Engineers will read journals when these journals are written in a form and style that they can comprehend. Furthermore, technological information can be provided in this form. Why then do the professional societies continue to publish material that only a small minority of their membership can use? If this information can be provided in a form that the average engineer can understand, why haven't the professional societies done so?
The obvious answer to these questions is that the societies have only recently become aware of the problem. In the past, they were almost totally ignorant of even the composition of their membership, and they still know little of their information needs. Thus, they have never had the necessary information to formulate realistic goals or policy. Perhaps the most unfortunate circumstance that ever befell the engineering profession in the United States is that at the time when it first developed a self-awareness and began to form professional societies, it looked to the scientific societies, which had then existed for over 200 years, to determine their form and function.

Interestingly, though, I do not fit the description of the engineer that Allen gives on page 99:

THE IMPORTANCE OF COMMUNICATION WITHIN THE LABORATORY
Most engineers are employed by bureaucratic organizations. Academic scientists are not. The engineer sees the organization as controller of the only reward system of any real importance to him and patterns his behavior accordingly. While the academic scientist finds his principal reference group and feels a high proportion of his influence from outside the organization, for the engineer, the exogenous forces simply do not exist. The organization in which he is employed controls his pay, his promotions, and, to a very great extent, his prestige in the community.

To be clear, I get a ton out of working closely with people. I worked alone building bikes for a full three years, and was solo and very isolated during much of the two-year construction project I completed after college; the lack of camaraderie in those situations was hard on me. I learned through that process that working with people - and having a mutual feeling of respect and enthusiasm - was incredibly important. I've gotten a ton out of all of the colleagues I've had since then - including many who I initially clashed with.

But exogenous forces in my life absolutely exist, and are important too. I benefit greatly from keeping contact with people elsewhere in my industry - and people outside of it - and I'm confident that the companies I've worked for have benefited from my network too.

My belief is that there's more room for these things to coexist than most companies realize. As evidence: when I began working on metal 3D printing, I knew nothing about it - and didn't work at a company that had any particular interest in it in the first place. I believe that it is only through my openness that I've gotten where I am today, and through that openness I've also vastly improved my access to experienced engineers across the industry. I've gotten cold emails from people working at some of the biggest and most advanced R&D organizations in the world, something I don't think would ever have happened had I not shared the way I did. And I'm confident that my relationships with these people are mutually beneficial - both to us as people and to the companies who employ us.

I'm about a third of the way through Managing the Flow of Technology now; I'll probably finish it in the next month. I recommend it.

EBM surface finishes and MMP

Added on by Spencer Wright.

When I visited MicroTek Finishing, a Cincinnati-based precision finishing company, in late 2014, I was intent on printing my seatmast topper with laser powder bed fusion. DMLS's install base is relatively large, making it easy to source vendors and compare pricing. And while their surface finish and dimensional accuracy can leave something to be desired, DMLS parts can be put into service with minimal post processing.

But as I was saying goodbye to Tim Bell (my host at MicroTek) that afternoon, he planted a seed. I should try building my parts in EBM, he said - and see if MicroTek's MMP process could bring the rough parts up to a useable state.

That same day, I asked Dustin and Dave (both of whom I worked with on my seatmast topper) what they thought of the idea. Dave had extensive experience on an Arcam A2, and thought it was definitely worth trying out. Relative to DMLS, EBM is a quick process (for more details on Arcam and EBM, see the Gongkai AM user guide), and a big portion of the cost structure of metal AM parts is the amount of time they take to print. Furthermore, parts can often be stacked many layers high on EBM machines, allowing the fixed costs of running a build to be distributed over a larger number of parts. And while EBM parts do tend to be rough (and have larger minimum feature sizes than DMLS), they also tend to warp and distort less - making the manufacturing plan a bit simpler in that respect.

Shortly after that trip, I reached out to Addaero Manufacturing. I visited them soon after, and then asked if they'd be interested in exploring an EBM->MMP process chain. They were, and provided three identical parts to experiment on.

The part in question is the head of a seatpost assembly for high end road bikes. The part itself is small - about 70mm tall and with a 35mm square footprint. As built, it's just 32g of titanium 6/4. Add in a piece of carbon fiber tubing (88g for a 300mm length) and some rail clamp hardware (50g), and the entire seatpost assembly should be in the 175g range - on par with the lightest seatposts on the market today.

As a product manager who's ultimately optimizing for commercial viability, I had three questions going into this process:

  1. How do the costs of the different manufacturing process chains compare? 
  2. How do the resulting parts compare functionally, i.e. in destructive testing?
  3. Functionality being equal, how do the aesthetics (and hence desirability) of the parts compare?

I'll write more about the second point later; in this post, my primary aim is to introduce MMP and compare the different process chains from a financial and operational standpoint.

Basics of surface texture

As confirmed by a 1990 NIST report titled Surface Finish Metrology Tutorial, "there is a bewildering variety of techniques for measuring surface finish." Moreover, most measurement methods focus only on the primary texture - the roughness itself - and incorporate some method of controlling for waviness and form. From the same report:

The measured profile is a combination of the primary and secondary texture. These distinctions are useful but they are arbitrary in nature and hence, vary with manufacturing process. It has been shown, but not conclusively proven that the functional effects of form error, waviness and roughness are different. Therefore, it has become an accepted practice to exclude waviness before roughness is numerically assessed.

Surface finish is usually measured using the stylus technique:

The most common technique for measuring surface profile makes use of a sharp diamond stylus. The stylus is drawn over an irregular surface at a constant speed to obtain the variation in surface height with horizontal displacement.

The most common surface texture metric is Ra. (For a good, quick, technical description of the varieties of surface texture metrics, see this PDF from Accretech.) Ra measures the average deviation of the profile from the mean line (the related Rq also measures deviation from the mean line, but using a root mean square method), and is used across a variety of industries and manufacturing methods. But it's incapable of describing a number of important aspects of a part. For instance, it's critical (for both aesthetic and functional reasons) that my parts have Rsk (skewness) values close to zero - meaning that their surfaces are free from flaws like pits and warts. In other words, I'd take a consistent, brushed surface over one that's highly polished but has a few deep cuts/pits.
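To make those definitions concrete, here's a quick sketch of how Ra, Rq, and Rsk fall out of a sampled profile (the heights below are made up; real measurements would come from a stylus trace with waviness already filtered out):

```python
# Sketch: computing Ra, Rq, and Rsk from sampled profile heights,
# using the standard (ISO 4287-style) definitions. Data is invented.
import numpy as np

z = np.array([0.8, -1.2, 0.3, 2.1, -0.9, -0.4, 1.0, -1.7])  # heights, um
dev = z - z.mean()                 # deviations from the mean line

Ra  = np.mean(np.abs(dev))         # arithmetic average roughness
Rq  = np.sqrt(np.mean(dev ** 2))   # root mean square roughness
Rsk = np.mean(dev ** 3) / Rq ** 3  # skewness: < 0 pitted, > 0 spiky

print(f"Ra={Ra:.2f}um  Rq={Rq:.2f}um  Rsk={Rsk:+.2f}")
```

Note that Rsk keeps the sign of each deviation (the cubing preserves it), which is exactly what lets it distinguish pits from warts.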

I should note, of course, that surface finish is a result of the total manufacturing process chain. If the near net shape part (straight out of the EBM machine) is rough and pitted, then it'll be difficult to ever make it acceptable - and the methods required to do so will vary widely. 

MicroTek and MMP

MicroTek is just one in an international network of companies that perform MMP, which grew out of a Swiss company called BESTinCLASS Industries. The MMP process is closely guarded; neither MicroTek nor BiC discloses enough about the process for an outsider to really understand how it works. From MicroTek's website:

MMP Technology is a mechanical-physical-catalyst surface treatment applied to items placed inside a processing tank.  MMP technology is truly different from traditional polishing processes because of the way it interacts with the surface being treated.
MMP Technology uses a mechanical cutting process at a very small scale (not an acid attack or any other process that could alter the part's metallurgical properties), meaning it can distinguish between micro-roughness and small features. The process actually maps the surface as a collection of frequencies of roughness, removing first the highest frequencies, then removing progressively lower frequencies.
Unlike other polishing processes, MMP Technology can stop at any point along the way, so now for the first time it is possible to selectively remove only the ranges of roughness that you don't want on the surface, giving you the option of leaving behind lower frequencies of roughness that could be beneficial to the function of the part.

To hear Tim and JT Stone tell it, MicroTek essentially does a Fourier transform on the topography of the part. They analyze the surface finish as the combination of many low and high frequency functions, and begin the MMP process by characterizing those different functions and identifying which ones to remove. Then, by selecting "an appropriate regimen of MMP Technology from the several hundred treatments available," they selectively remove the undesirable aspects of the surface finish - while still preserving the underlying form of the part.

This is worth highlighting: traditionally, polishing is a process whereby a part is eaten away by abrasive media. With each successive step, progressively smaller scratches are made in the part's surface. You're constantly cutting down the peaks of the part, and as a result the form gets smaller and smaller over time. With MMP, you have the flexibility to remove high frequencies while keeping lower ones - maintaining the original intended shape.
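The actual MMP process is proprietary, but the frequency-decomposition idea itself is easy to illustrate. Here's a toy version (entirely mine - MicroTek's process is mechanical, not a software filter): treat the profile as a sum of spatial frequencies, and remove only the high ones.

```python
# Toy illustration of frequency-selective smoothing: a long-wavelength
# form plus fine roughness; drop only the high frequencies and the
# underlying form survives. This is an analogy, not MicroTek's process.
import numpy as np

x = np.linspace(0, 1, 512, endpoint=False)
form      = 5.0 * np.sin(2 * np.pi * 1 * x)    # the intended shape
roughness = 0.3 * np.sin(2 * np.pi * 80 * x)   # fine, unwanted texture
profile   = form + roughness

spectrum = np.fft.rfft(profile)
freqs    = np.fft.rfftfreq(len(profile), d=x[1] - x[0])
spectrum[freqs > 20] = 0                       # cut high frequencies only
smoothed = np.fft.irfft(spectrum, n=len(profile))

print(np.allclose(smoothed, form))             # True: the form is intact
```

Contrast that with abrasive polishing, which attacks all frequencies at once - and takes the form down with the roughness.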

The parts

Addaero printed three identical parts for me. I sent two to MicroTek. They processed one for fatigue resistance, and the other they "made BLING."


MicroTek sent detailed inspection reports with the parts, and the picture they paint is fascinating. MMP reduced both Ra and Rq drastically, and Rt dropped significantly as well. Rsk is a bit of a different story, however: in one of the measurement locations ("Side of leg"), it dropped well into the negative range. You'll recall that the absolute value of skewness is really the issue here; a negative number (indicating pitting) is just as bad as a positive one (indicating warts/spikes).

I've put the raw data in a Google Sheet, here; the full inspection reports are here and here. The charts below show most of the relevant information, broken down by the area of the part being tested. A helpful description of the part's areas ("V-neck face," etc) is here.

[Charts: Ra (roughness average), Rq (root mean square roughness), and Rt (total roughness), all in μm, plus Rsk (skewness, which is dimensionless) - each broken down by the area of the part being tested.]

MicroTek also sent a series of photos taken with a Hirox digital microscope at a variety of magnifications:

If it's not clear from all of the photos and charts above, the improvement that MMP made to these parts is really remarkable. The as-printed part is really rough - on average, it's about as rough (Ra/Rq) as 120 grit sandpaper (see this for a good analysis of sandpaper surface texture). MicroTek was able to eliminate the vast majority of the total and arithmetic mean roughness; both parts they processed feel very much like finished products.

The pitting, however, is a problem. To be clear, it's not a result of the MMP process; all they did was expose flaws that were already in the part. Many of these could probably be eliminated on future batches. First, printing the parts on a newer Arcam system (like the Q20) might improve the as-printed texture significantly. And second, MicroTek can investigate more complex treatments that allow for the offending frequencies to be eliminated more thoroughly. I'll be exploring these (and other) options in the coming months.

Assembly

Before putting the seatposts together, a little bit of prep was necessary. The inner diameters of both the seatpost and saddle clamp cylinders were slightly undersized, and there were warts (remnants of support structures) left on the undersides of the shoulder straps. I had intentionally left these untouched when I sent the parts to MicroTek, as I wanted to see how little post processing I could get away with. The MMP process took them down slightly, but not nearly enough to put the parts into service.

Fixing that was pretty straightforward - just a few minutes each with a file. In future iterations, I'm hoping that making some light design modifications - and dialing in the EBM build parameters - will minimize this work. If not, then I'll probably add CNC machining into my process chain (after printing and before finishing).

With the inner diameters trued up, the parts could be dry fitted to the carbon fiber tubing I'm using as a seatpost:

I'll be gluing the assemblies together with 3M DP420 this week, and then I'll send them out for testing. These parts will be tested to the same ISO standard that my seatmast topper passed last summer, and I'm particularly curious to know whether the different levels of post processing have any effect on their strength. In high fatigue cycle applications (this paper defines "high fatigue cycle" as N>100,000, which is exactly what my parts will be tested to), improvements in surface finish (lower Ra) have been shown to increase fatigue life. If some form of surface finishing (MMP or otherwise) means that I can print a lighter AND stronger part, that'll definitely help justify the expense.

Cost

With my current design and a batch size of 275 (a full batch in an Arcam Q20), my as-printed cost will be under $100. MMP will cost an additional $40-75 (depending on finish level), though those numbers were based on smaller quantities. I'd hope that the rollup cost to me is under $150.

In addition to these parts, a full seatpost requires about $25 worth of carbon fiber, a few dollars' worth of glue, and (I suspect) under ten minutes of assembly time. They'll also require saddle rail hardware, which I'm budgeting an additional $25 for, and some packaging - under $10. All told, my cost of goods sold would be about $215.
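Here's that rollup as a quick sanity check (the glue figure is my ballpark, and I've taken MMP at the middle of its quoted range):

```python
# Rough COGS rollup from the numbers above. Glue is a ballpark figure;
# MMP is taken mid-range; everything else is as quoted in the post.
costs = {
    "EBM print (batch of 275)": 100,  # "under $100"
    "MMP finishing":             57,  # midpoint of the $40-75 range
    "carbon fiber tube":         25,
    "glue (3M DP420)":            3,  # "a few dollars"
    "saddle rail hardware":      25,
    "packaging":                 10,  # "under $10"
}
print(f"${sum(costs.values())}")      # -> $220, in line with ~$215
```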

That's a fancy seatpost, but it's not completely unreasonable. My goal, at the moment, is to get that price down to $150.

More updates soon :)


Thanks to Addaero and MicroTek for their ongoing help with this project.

Photos from NYIO's trip to the Hudson Yards project

Added on by Spencer Wright.

Last week, the New York Infrastructure Observatory was lucky enough to tour the Hudson Yards Redevelopment project - the largest private real estate development project in US history. From my announcement email:

I just want to reiterate that: This is 26+ acres of active rail yard, on which Related Companies and Oxford Properties are building over 12 million square feet of office, residential, and retail space, designed by Kohn Pedersen Fox. And the trains underneath (did I mention that all of this development is being built on a huge platform supported by columns?) will keep running throughout construction.

The Hudson Yards project will remake a big part of the NYC skyline, and includes large changes to the infrastructure in the area. It's a once in a generation project, and it was *really* great to see it in person.

You can see my photos (with captions, if you click them) below. Gabe Ochoa also posted a bunch on his blog, which I recommend checking out too!

Hudson Yards from the new 7 train entrance

Also: You should really read The Power Broker.

On PDFs

Added on by Spencer Wright.

I've written offhand things about PDFs before, but Ben Wellington is clear and straightforward in this post:

     You see, PDFs are where information goes to die, rather than to be used.

If you have something to communicate, think *really* hard about whether you're okay with it dying. This goes for *way* more than just public data, too. Product info, scientific research, industry knowledge... Put it in a PDF, and it's frozen.

This is why Gongkai AM is on GitHub. It's a weird platform for most people in R&D and engineering, but one that allows for *way* more flexibility and longevity. 

Rsk

Added on by Spencer Wright.

From NIST 89-4088, "Surface Finish Metrology Tutorial," a simple graphic showing the role of Rsk when evaluating surface texture:

Rsk describes surface texture skewness. Each of the above surfaces has the same roughness average (Ra), but they differ greatly in their skewness. Rsk describes this difference, allowing for a pitted surface to be distinguished from a spiky one.
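For reference, the standard sampled definition (with z_i the profile's deviations from the mean line over n points):

\[
R_{sk} = \frac{1}{R_q^3} \cdot \frac{1}{n}\sum_{i=1}^{n} z_i^3,
\qquad
R_q = \sqrt{\frac{1}{n}\sum_{i=1}^{n} z_i^2}
\]

Cubing preserves the sign of each deviation, so deep pits drag Rsk negative while spikes push it positive; dividing by Rq cubed makes the result dimensionless.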

This is useful.

Coming soon

Added on by Spencer Wright.

Today I *finally* found time to photograph the parts that I got back from MicroTek a few weeks ago:

As you can probably see, the part on the left is unfinished. In the middle is an intermediate finish (~25µ" Ra), and on the right is a fine finish (~1.5µ" Ra). All three of these parts were printed on Addaero's Arcam A2X; their raw finish is about 600µ" Ra. 

Incidentally, I'll note that photographing the surface finishes on these parts has been remarkably challenging. I probably need a strobe or something, but hey - it's a labor of love.

I'll be writing up the results in the next few weeks. Stay tuned!

Speaking

Added on by Spencer Wright.

Just a PSA: I'm speaking at three upcoming conferences! Give me a holler if you'll be at any of them - I'd love to chat.

  • Develop3D Live, 2016.03.31. Come to Warwick, UK and talk to the cream of the crop in CAD software.
  • AMUG, 2016.04.03-07. St. Louis! The biggest group of additive manufacturing users in the world. A *great* place to meet people working actively in AM, and trade process knowledge & expertise.
  • RAPID, 2016.05.17-19. North America's largest 3D printing event moves to Orlando. A great place to survey the industry and see what's coming up.

Good newsletters

Added on by Spencer Wright.

Prompted by Brendan, I wanted to list a few good email newsletters that I subscribe to and consistently enjoy. If you know of one that isn't on this list, give a holler and I'll check it out.

  • My own The Prepared <- shameless plug.
  • Reilly Brennan's Future of Transportation. Lot of autonomous car stuff here, but also great coverage of less hyped-up developments. I particularly like the "Patents & patent applications" section - mostly because it's a useful category of stuff, which I've really yet to develop for The Prepared.
  • Alexis Madrigal's Real Future, formerly Five Intriguing Things. Lot of obscure stuff here, all appropriately nerdy :)
  • Jon Russell's Asia Tech Review. Most of this is way beyond my interest level, but I like to keep more or less up to date on China and Jon does a good job at that.
  • Ingrid Burrington's Infrastructure Time. I'm not 100% sure that this is continuing beyond Ingrid's trip to the west, but I *really* liked her format and subject matter. 
  • Tilly Minute's New Yorker Minute. This got a lot of coverage recently as a way to cheat and act like you had read the New Yorker, but I find it really useful for its real purpose: as a filter for what to devote my attention to.

I'll also mention Benedict Evans's newsletter. I don't really enjoy it much anymore, but you should be aware of it nonetheless.

Not as hopelessly unyielding

Added on by Spencer Wright.

From a piece in the New Yorker about The Ford Foundation (lightly edited on my part):

The urge to change the world is normally thwarted by a near-insurmountable barricade of obstacles: failure of imagination, failure of courage, bad governments, bad planning, incompetence, corruption, fecklessness, the laws of nations, the laws of physics, the weight of history, inertia of all sorts, psychological unsuitability on the part of the would-be changer, the resistance of people who would lose from the change, the resistance of people who would benefit from it, the seduction of activities other than world-changing, lack of practical knowledge, lack of political skill, and lack of money.
Lack of money is a stubborn obstacle, but not as hopelessly unyielding as some of the others.

The above was written in the context of social justice, but much of the paradox in this article translates to business too. While lack of money can certainly screw you up, it's more common to fail because of the multitude of other factors working against you - many of which are *far* more difficult to overcome than lack of money.

I don't want to be Amazon.

Added on by Spencer Wright.

GE CEO Jeff Immelt, talking with Henry Blodget:

There's a lot of people who have gotten fired thinking they're Jeff Bezos. So I don't want to be Amazon. I want to be GE.

This is right after Immelt refers to Bezos as someone he admires.

I've thought and written about corporate self awareness before (in particular with respect to Amazon and McMaster-Carr; see also the last paragraph or two of this old post about Pixar), but it's been increasingly on my mind recently. 

Knowing who you are - and having the fortitude to act accordingly - is key.

Joining nTopology

Added on by Spencer Wright.

Nine months ago I had one of those random conversations where you walk away feeling thrilled to be working in an industry with such compelling, intelligent people.

I had met Bradley before then (there are only so many people working on additive manufacturing in NYC), but only in passing. In the meantime our paths had diverged somewhat. He was working hard on design software, whereas I had focused on getting industrial AM experience through developing a physical product. But our approaches to the industry had converged, and we had developed a shared enthusiasm for addressing the technological problems in AM head on. We became instant allies, and started swapping emails on a weekly basis. 

In August, when nTopology launched their private beta program, I jumped at the chance to use it in my own designs. The engineering advantages of lattice structures were immediately evident, and nTopology's rule-based approach allowed me to quickly develop designs that met my functional goals. And as I spent more time with nTopology's software - and got to know Greg, Matt, Erik, and Abhi - my enthusiasm about what they were building only grew.

Today I'm thrilled to announce that I'm joining nTopology full time, to run business operations and help direct product strategy. nTopology's team, mission, and product are all precisely what I've been looking for since I began working on additive manufacturing, and I can't wait for the work we've got ahead of us.

For posterity, here are a few thoughts about nTopology's approach towards design for additive manufacturing:

  1. From the very beginning of my work in AM, it was evident that traditional CAD software would never let me design the kinds of parts I wanted. I was looking for variable density parts with targeted, anisotropic mechanical properties - things that feature-based CAD is fundamentally incapable of making. nTopology's lattice design software, on the other hand, can. 
  2. As the number of beams in a lattice structure increases beyond a handful, designing by engineering intuition alone becomes totally impractical. It's important, then, to run mechanical simulations early on, and use the results to drive the design directly. nTopology let me do just that.
  3. nTopology's approach towards optimization lets me, the engineer, set my own balance between manual and algorithmic design. This is key: when I intuitively know what the design should look like, I can take the reins. When I'd rather let simulation data drive, that's fine too. The engineering process is collaborative - the software is there to help, but gets out of the way when I need it to.
  4. Best of all, nTopology doesn't limit me to design optimization - it lets me design new structures and forms as well. That means far more flexibility for me. No longer am I locked into design decisions artificially early in my workflow, when a lot of the effects of those decisions are unknown. nTopology gives a fluid transition from mechanical CAD to DFM, and lets me truly consider - and adjust - my design's effectiveness and efficiency throughout the process.

The nTopology team has shown incredible progress in a tiny amount of time. They've built a powerful, valuable, and intuitive engineering tool in less than a year - and have set a trajectory that points towards a paradigm shift in additive manufacturing design.

In the coming months, I'll be writing more about our company, our mission, and our design workflow. If you're an engineer, developer, or UI designer interested in working on the future of CAD, send me a note or see our job postings on AngelList. To learn more about purchasing a license of nTopology Element, get in touch with me directly here.

Two years of The Prepared

Added on by Spencer Wright.

I began writing The Prepared, my weekly manufacturing newsletter, two years ago. I wrote a year-in-review of sorts this time last year, and thought I'd update it here.

First: The Prepared's subscriber list has increased by 185%, from 195 to 556. Its cumulative open and click rates are 52.3% (down from 54%) and 28.6% (down from 29%) respectively. 26 people unsubscribed in 2015.

Less tangibly but equally important, I feel notably closer to my subscribers than I did last year. I've connected with many of them by email or on Twitter, and have had more phone calls and coffees than I can keep track of. And as I've focused my area of interest, my audience has become more focused too. Increasingly, it includes people doing some of the most serious and interesting work in manufacturing today.

I also, for the first time, had a guest editor this year: Eric Weinhoffer, who filled in while I was on my honeymoon. Handing over the keys was good, and made me think about what The Prepared might look like if it weren't just my weekly manufacturing newsletter. I'm not sure whether I'll pursue a change in the near term (maintaining the current course is probably the path of least resistance for now), but even the possibility was interesting to consider.

As I wrote last year, The Prepared is "arguably the single most popular and useful thing that I do." It continually pushes me to make my knowledge base both broader and deeper, and has brought more people into my life than almost anything I've ever done.

Here's to another year!

Computer aided design

Added on by Spencer Wright.

Over the past week, one particular tweet has shown up in my timeline over and over:


The photos in this tweet have been public for over a year now. I've been aware of the project since last June; it was created by Arup, the fascinating global design firm (whose ownership structure is similarly fascinating). They needed a more efficient way to design and manufacture a whole series of nodes for a tensile structure, and for a variety of reasons (including, if I recall correctly, the fact that each node was both unique and difficult to manufacture conventionally) they decided to try out additive manufacturing. As it happens, I was lucky enough to speak to the designer (Salomé Galjaard) by phone a few months ago, and enjoyed hearing about the way they're thinking of applying AM to large construction projects.

In short: I'm a fan of the project, and love to see it get more exposure. There's something about the particular wording of Jo Liss's tweet, though, that is strange to me. Specifically, I find myself asking whether a computer did, indeed, design the new nodes.

(Note: I don't know Jo Liss and don't mean to be overly critical of her choice of wording; it's simply a jumping off point for some things I've been mulling over. I also don't believe that I have any proprietary or particularly insightful information about how Arup went about designing or manufacturing the nodes in question.)

As far as I can tell, Arup's process worked like so: Engineers modeled a design space, defined boundary conditions at the attachment points (which were predefined), and applied a number of loading conditions to the part. Here the story gets less clear; some reports mention topology optimization, and others say that Arup worked with Within (which is *not* topology optimization). My suspicion is that they used something like solidThinking Inspire to create a design concept, and then modeled the final part manually in SolidWorks or similar. Regardless, we can be nearly sure that the model that was printed was indeed designed by a human; that is, the actual shapes and curves we see in the part on the right were explicitly defined by an actual engineer, NOT by a piece of software. This is because nearly every engineered component in AEC needs to be documented using traditional CAD techniques, and neither Within nor solidThinking (nor most of the design optimization industry) supports CAD export. As a result, most parts that could be said to be "designed by a computer" are really merely sketched by a computer, while the actual design & documentation is done by a human.

This may seem like a small quibble, but it's far from trivial. Optimization (whether shape, topology, or parametric) software is expensive, and as a result most of the applications where it's being adopted involve expensive end products: airplanes, bridges, hip implants, and the like. Not coincidentally, those products tend to have stringent performance requirements - which themselves are often highly regulated. Regulation means documentation, and regulating bodies tend not to be (for totally legitimate reasons which are a bit beyond the scope of this blog post) particularly impressed with some computer generated concept model in STL or OBJ format. They want real CAD data, annotated by the designer and signed off by a string of his or her colleagues. And we simply haven't even started to figure out how to get a computer to do any of that stuff.

I'm reminded here also of something that I've spent a bunch of time considering over the past six months. The name "CAD" (for Computer Aided Design) implies that SolidWorks and Inventor and Siemens NX are actively helping humans design stuff. To me, this means making actual design decisions, like where to put a particular feature or what size and shape an object should be. But the vast majority of the time that isn't the case at all. Instead, traditional CAD packages are concerned primarily with helping engineers to document the decisions that they've already made.

The implications of this are huge. Traditional CAD packages never had to find ways for the user to communicate design intent; they only needed to make it easy for me to, for instance, create a form that transitions seamlessly from one size and shape to another. For decades, that's been totally fine: the manufacturing methods that we had were primarily feature based, and the range of features that we've been good at making (by milling, turning, grinding, welding, etc) are very similar to the range of features that CAD packages were capable of documenting.

But additive manufacturing doesn't operate in terms of features. It deals with mass, and that mass is deposited layer by layer (with the exception of technologies like directed energy deposition, which is different in some ways but still not at all feature based). As a result, it becomes increasingly advantageous to work directly from design intent, and to optimize the design not feature by feature but instead holistically. 
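For a back-of-envelope sense of what "mass, layer by layer" means, consider this toy calculation (the part and the layer height are both assumed; nothing here describes any particular machine), which slices a 10 mm radius sphere into 40 micron layers:

```python
import numpy as np

r, layer_height = 10.0, 0.04                             # radius & layer, mm
z = np.arange(-r, r, layer_height) + layer_height / 2    # layer midpoints
area = np.pi * (r**2 - z**2)                 # circular cross-section, mm^2
volume = (area * layer_height).sum()         # approximates (4/3)*pi*r^3

print(len(z), "layers,", round(volume, 1), "mm^3 of deposited mass")
```

Five hundred layers, and not a single feature anywhere in the description - just successive cross-sections of mass. That's the vocabulary additive processes actually speak.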

One major philosophical underpinning of most optimization software (including both Within and solidThinking Inspire) is that distributing mass to meet a set of design intentions is a task better suited to software than to humans. (Those intentions are chiefly mechanical strength and mass, though longtime readers of this blog will know that I feel manufacturability, aesthetics, and supply chain complexity must be considered in the calculation as well.) In that respect, these tools stand squarely opposed to the history of Computer Aided Documentation: they want CAD software to be making actual design decisions, presumably with the input and guidance of the engineer.
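As a heavily simplified sketch of what "better suited to software" means, here's a toy sizing problem: let scipy choose the cross-section of a symmetric two-bar truss for minimum mass under a stress limit. (Every number below - geometry, load, material - is assumed for illustration; real tools like Within and Inspire solve enormously richer versions of this.)

```python
import numpy as np
from scipy.optimize import minimize

P = 10e3                   # vertical load at the apex, N (assumed)
theta = np.deg2rad(45)     # bar angle from horizontal (assumed)
L = 1.0 / np.sin(theta)    # bar length for a 1 m rise, m
sigma_max = 250e6          # allowable stress, Pa (~mild steel)
rho = 7850.0               # density, kg/m^3 (steel)

bar_force = P / (2 * np.sin(theta))   # statics of the two-bar truss

res = minimize(
    lambda A: 2 * rho * L * A[0],     # objective: total mass, kg
    x0=[1e-3],                        # starting cross-section, m^2
    bounds=[(1e-8, None)],
    constraints=[{"type": "ineq",     # keep stress below the limit
                  "fun": lambda A: sigma_max - bar_force / A[0]}],
)
print(res.x[0], "m^2 per bar ->", 2 * rho * L * res.x[0], "kg total")
```

The engineer states intent - loads, limits, materials - and the software picks the dimensions. The whole promise of design optimization is scaling that division of labor from one cross-section up to an entire part.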

If it's not clear, I agree with the movement towards true computer aided design. But CAD vendors will need to overcome a number of roadblocks before I'd be comfortable saying that my computer designs anything in particular:

First, we need user interfaces that allow engineers to effectively communicate design intent. Traditional CAD packages never needed this, and optimization software has only just begun the task of rethinking how engineers tell their computers what kind of decisions they need them to make. 

Second, we need to expand the number of variables we're optimizing for. Ultimately I believe this means iteratively focusing on one or two variables at a time, as the curse of dimensionality will make high-dimensional optimization impractical for the foreseeable future. It's because of this that I'm bullish on parametric lattice optimization (and nTopology), which optimizes strength and weight on lattice structures that are (given input from the engineer) inherently manufacturable and structurally efficient; see the back-of-envelope sketch after this list.

Third, we need a new paradigm for documentation. This is for a few reasons. To start, the kinds of freeform & lattice structures that additive manufacturing can produce don't lend themselves to traditional three view 2D drawings. But in addition, there's a growing desire [citation needed] within engineering organizations to unify the design and documentation processes in some way - to make the model itself into a repository for its own design documentation.
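Back to the dimensionality point from the second roadblock above: compare the number of design variables in a voxel-based topology optimization against a parametric lattice whose topology the engineer has already chosen (both counts below are made up, but representative):

```python
# Voxel-based topology optimization: one density variable per voxel.
voxels_per_side = 200
topology_vars = voxels_per_side ** 3    # 8,000,000 design variables

# Parametric lattice optimization: one diameter per strut.
strut_count = 5_000
lattice_vars = strut_count              # 5,000 design variables

print(topology_vars // lattice_vars)    # 1600x fewer dimensions to search
```

Constraining the search to a family of structures that's known to be manufacturable doesn't just make the problem tractable - it bakes the engineer's intent into the design space itself.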

These are big, difficult problems. But they're incredibly important to the advancement of functionally driven design, and to the integration of additive manufacturing's advantages (which are significant) into high value industries. And with some dedicated work by people across advanced design and manufacturing, I hope to see substantive progress soon :)


Thanks to Steve Taub and MH McQuiston for helping to crystallize some of the ideas in this post.

After publishing this post, I got into two interesting Twitter conversations about it - one with Ryan Schmidt, and the other with Kevin Quigley. Both of them know a lot about these subjects; I recommend checking the threads out.

Photos from a visit to CCAT

Added on by Spencer Wright.

A few months back I had the pleasure of visiting the Connecticut Center for Advanced Technology, which is located on the UTC/Pratt & Whitney East Hartford campus. CCAT began as a facility focused on researching laser drilling, but has moved deeper into 3D printing, and specifically directed energy deposition, in the past few years. 

In addition to a full subtractive (manual and CNC) shop, CCAT has a few cool additive tools that I was particularly interested in. The first is an Optomec 850R LENS system. The 850R is a large format directed energy deposition machine which can be used both to build new parts and to repair existing ones. It's also useful for material development, as DED machines can create parts using only a small amount of powder, while powder bed fusion machines generally require a large amount.

(Click on the photos for larger versions + descriptions)

The other thing I was excited to see was their Kuka HA30 robot, which has a coaxial laser cladding head attached to it. This robot can be used for either etching/engraving or cladding, meaning that it can either subtract or add material to a part. Especially when combined with the two-axis rotary table shown below, this thing can create some really complex parts.

It was really cool seeing these specialized technologies being used in real life. Thanks to CCAT for having me!

Photos from an antenna factory in Shenzhen

Added on by Spencer Wright.

This past July, Zach and I visited The Public Radio's antenna supplier in Shenzhen. I had only a vague idea of how antennas were made, and it was interesting to see the process in person. It was also fascinating to see a shop that relied so much on manual and mechanically driven machinery. 

A few observations:

  • This shop manufactures a variety of parts, with the defining feature being that they're made of tubing. For our antennas, the process works basically like this:
    • Tubing is bundled together with zipties and cut to length by wire EDM.
    • Tubing ends are swaged in/out.
    • Sections are assembled into a single telescoping unit.
    • Meanwhile, end fittings are manufactured from solid stock. This happens either on the automatic turret lathes, or on single-operation manual machines (lathes/drill presses).
    • End fittings are installed on the telescoping antennas, again using swaging/forming processes.
  • The whole operation was decidedly low-tech and manual - almost disturbingly so. It would seem very difficult to control quality - which I guess should be expected when you're looking at a niche, and rather inexpensive, commodity product.

A few of the photos have notes on them - click to show.

Murray Hill

Added on by Spencer Wright.

I'm reading The Idea Factory, and this description of Bell Labs' Murray Hill facility jumped out at me:

Kelly, Buckley, and Jewett were of the mind that Bell Labs would soon become - or was already - the largest and most advanced research organization in the world. As they toured industrial labs in the United States and Europe in the mid-1930s, seeking ideas for their own project, their opinions were reinforced. They wanted the new building to reflect the Labs' lofty status and academic standing - "surroundings more suggestive of a university than a factory," in Buckley's words, but with a slight but significant difference. "No attempt has been made to achieve the character of a university campus with its separate buildings," Buckley told Jewett. "On the contrary, all buildings have been connected so as to avoid fixed geographical delineation between departments and to encourage free interchange and close contact among them." The physicists and chemists and mathematicians were not meant to avoid one another, in other words, and the research people were not meant to evade the development people.
By intention, everyone would be in one another's way. Members of the technical staff would often have both laboratories and small offices - but these might be in different corridors, therefore making it necessary to walk between the two, and all but assuring a chance encounter or two with a colleague during the commute. By the same token, the long corridor for the wing that would house many of the physics researchers was intentionally made to be seven hundred feet in length. It was so long that to look down it from one end was to see the other end disappear at a vanishing point. Traveling its length without encountering a number of acquaintances, problems, diversions, and ideas would be almost impossible. Then again, that was the point. Walking down that impossibly long tiled corridor, a scientist on his way to lunch in the Murray Hill cafeteria was like a magnet rolling past iron filings.

Sounds like my kind of place.