Only the last 10%

Added on by Spencer Wright.

From a conversation on engineering between Arup's Dan Hill and Tristram Carfrae:

In response to fears that this kind of 'algorithmic architecture' will marginalise engineers and architects, Carfrae states that this kind of approach is only really "optimising the last 10% of a problem." The software has to be described and tuned with a particular strategy or problem in mind, and that comes from the designer, not the software. 

The point here is one that I've argued many times in the past: Today's optimization approaches (and any in the adjacent possible future) do not, in fact, put computers in the driver's seat of engineering or design. Instead, they use computers to automate rote tasks that an engineer is interested in exploring.

I believe this distinction is critical, as it affects both the direction of CAD companies' efforts and the enthusiasm of a new generation of engineers. It's my desire to see the CAD industry prioritize efforts that'll have big, positive impacts on the world, and it's my goal to keep smart, driven people from becoming disillusioned with engineering. As a result, I'd encourage marketers, journalists, and onlookers to seriously consider what they believe about optimization, and to be wary of anyone who tries to sell them an AI-enabled Brooklyn Bridge.

For more background on optimization and the future of CAD software, see Displaced in space or time, The problem with 3D design optimization today, Computer aided design, and Exploration and explanation.

Exploration and explanation

Added on by Spencer Wright.

Apropos of Displaced in space or time, and just generally along the lines of what I spend a *lot* of time thinking about these days, a few thoughts on Michael Nielsen's recent post titled Toward an exploratory medium for mathematics. Note that my comments are largely placed in the field of CAD, while Nielsen is talking about math; hopefully the result isn't overly confusing.

Nielsen begins by separating out exploration from explanation:

Many experimental cognitive media are intended as explanations... By contrast, the prototype medium we'll develop is intended as part of an open-ended environment for exploration and discovery. Of course, exploration and discovery is a very different process to explanation, and so requires a different kind of medium.

I've touched on the explanatory aspects of CAD in the past (see in particular Computer aided design), but I had never really considered the dichotomy between exploration and explanation in such stark terms. This is partly a result of the fact that most CAD software has documentation built right into it. I've spent a *lot* of time using CAD tools to document parts in both 2D (multi-view PDFs) and 3D (STEP, STL, etc), and have had long conversations with engineers who swear up and down that design tools that don't make documentation easy aren't worth the time of day. 

My inclination is to think that the future will be increasingly integrated - in other words, that the divide between exploration and explanation is antiquated. But perhaps it's more useful to consider the many ways that (multifunctional CAD systems notwithstanding) these two aspects of engineering really have very little overlap. After all, my own CAD software has distinctly different interfaces for the two activities, and the way that I interact with the design interface is very different from the way my manufacturing partners will interact with my design explanations. Perhaps these activities could split even further; I see no a priori reason that this would be harmful at all.

Anyway, onward. Again, Nielsen - now talking specifically about the exploratory side of mathematics:

What we'd ideally like is a medium supporting what we will call semi-concrete reasoning. It would simultaneously provide: (1) the ability to compute concretely, to apply constraints, and to make inferences, i.e., all the benefits we expect a digital computer to apply... and (2) the benefits of paper-and-pencil, notably the flexibility to explore and make inferences about impossible worlds. As we've seen, there is tension between these two requirements. Yet it is highly desirable that both be satisfied simultaneously if we are to build a powerful exploratory medium for doing mathematics. That is true not just in the medium I have described, but in any exploratory medium.

I'll just pause here to say that this idea of "semi-concrete reasoning" is fantastic. Humans are quite capable of holding conflicting values at the same time; if computers are to be our partners in design, they'll need to do some analog of the same.

Instead of using our medium's data model to represent mathematical reality, we can instead use the medium's data model to represent the user's current state of mathematical knowledge. This makes sense, since in an exploratory medium we are not trying to describe what is true – by assumption, we don't know that, and are trying to figure it out – but rather what the user currently knows, and how to best support further inference.

Having adopted this point of view, user interface operations correspond to changes in the user's state of mathematical knowledge, and thus also make changes in the medium's model of that state. There is no problem with inconsistency, because the medium's job is only to model the user's current state of knowledge, and that state of knowledge may well be inconsistent. In a sense, we're actually asking the computer to do less, at least in some ways, by ignoring constraints. And that makes for a more powerful medium.

On this point, I agree that inconsistency itself isn't an issue at all - so long as it's made explicit to the user at all times. If a design fails to meet my needs for, say, manufacturability, then I should have some way of knowing that immediately - whether or not I choose to deal with it now or ever. Again, Nielsen:

Ideally, an exploratory medium would help the user make inferences, give the user control over how these inferences are made, and make it easy for the user to understand and track the chain of reasoning.

Yes.

Using the medium to support only a single stage of inference has several benefits. It naturally makes the chain of inference legible, since it mirrors the way we do inference with paper-and-pencil, every step made explicit, while nonetheless reducing tedious computational work, and helping the user understand what inferences are possible. It's also natural psychologically, since the user is already thinking in terms of these relationships, having defined the objects this way. Finally, and perhaps most importantly, it limits the scope of the interface design problem, since we need not design separate interfaces for the unlimited(!) number of possible inferences. Rather, for every interface operation generating a mathematical object, we need to design a corresponding interface to propagate changes. That's a challenging but finite design problem. Indeed, in the worst case, a “completely manual” interface like that presented earlier may in general be used.

With that said, one could imagine media which perform multiple stages of inference in a single step, such as our medium modifying s in response to changes in the tangent. Designing such a medium would be much more challenging, since potentially many more relationships are involved (meaning more interfaces need to be exposed to the user), and it is also substantially harder to make the chain of reasoning legible to the user.

Even with the simplification of doing single-step inference, there are still many challenging design problems to be solved. Most obviously, we've left open the problem of designing interfaces to support these single stages of inference. In general, solving this interface design problem is an open-ended empirical and psychological question. It's an empirical question insofar as different modes of inference may be useful in different mathematical proofs. And it is a psychological question, insofar as different interfaces may be more or less natural for the user. Every kind of relationship possible in the medium will require its own interface, and thus present a new design challenge. The simplest way to meet that challenge is to use a default-to-manual-editing strategy, mirroring paper-and-pencil.

I recognize that this is a somewhat long quote, but I think it's really critical. To paraphrase: Designing a UI that allows for multidimensional problems is *hard,* and it's hard for human users to glean actionable information from multidimensional data. 

Instead, we should break UIs up into discrete steps, allowing users to visualize and understand relationships piecewise. This means more individual UI modalities need to be designed, but by defaulting to manual editing strategies - which, as paper and pencil demonstrates, are damn good to start with - even that task becomes manageable.
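
To make that concrete for myself, here's a toy sketch (my own illustration, not Nielsen's prototype): changing a quantity propagates only to its direct dependents, each propagation is an explicit step the user triggers, and every step is logged so the chain of reasoning stays legible. The quantities and formulas are made up.

```python
# Toy sketch of single-step inference: a changed quantity propagates only to
# its direct dependents, one explicit step at a time, and every step is logged.
# This is an illustration of the idea, not Nielsen's prototype.

class Quantity:
    def __init__(self, name, value=None, formula=None, inputs=()):
        self.name, self.value = name, value
        self.formula, self.inputs = formula, list(inputs)

    def recompute(self):
        self.value = self.formula(*[q.value for q in self.inputs])

class Medium:
    def __init__(self):
        self.quantities, self.log = {}, []

    def define(self, name, value=None, formula=None, inputs=()):
        q = Quantity(name, value, formula, [self.quantities[i] for i in inputs])
        self.quantities[name] = q
        return q

    def set_value(self, name, value):
        self.quantities[name].value = value
        self.log.append(f"user set {name} = {value}")

    def step(self, name):
        """Propagate one inference step: recompute only direct dependents of `name`."""
        for q in self.quantities.values():
            if any(i.name == name for i in q.inputs):
                q.recompute()
                self.log.append(f"recomputed {q.name} from {name}")

m = Medium()
m.define("radius", value=2.0)
m.define("area", formula=lambda r: 3.14159 * r**2, inputs=["radius"])
m.define("cost", formula=lambda a: 10.0 * a, inputs=["area"])
m.set_value("radius", 3.0)
m.step("radius")   # area updates; cost does NOT until the user takes another step
m.step("area")     # now cost updates
print(m.quantities["cost"].value, m.log)
```

The key design choice is that `step` never cascades: the user decides when, and whether, each further inference happens.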

There's a lot here; I recommend reading the original post in its entirety. 

Computer aided design

Added on by Spencer Wright.

Over the past week, one particular tweet has shown up in my timeline over and over:

[Embedded tweet by Jo Liss, showing Arup's original node alongside the redesigned, additively manufactured version]

The photos in this tweet have been public for over a year now. I've been aware of the project since last June; it was created by Arup, the fascinating global design firm (whose ownership structure is similarly fascinating). They needed a more efficient way to design and manufacture a whole series of nodes for a tensile structure, and for a variety of reasons (including, if I recall correctly, the fact that each node was both unique and difficult to manufacture conventionally) they decided to try out additive manufacturing. As it happens, I was lucky enough to speak to the designer (Salomé Galjaard) by phone a few months ago, and enjoyed hearing about the way they're thinking of applying AM to large construction projects.

In short: I'm a fan of the project, and love to see it get more exposure. There's something about the particular wording of Jo Liss's tweet, though, that is strange to me. Specifically, I find myself asking whether a computer did, indeed, design the new nodes.

(Note: I don't know Jo Liss and don't mean to be overly critical of her choice of wording; it's simply a jumping off point for some things I've been mulling over. I also don't believe that I have any proprietary or particularly insightful information about how Arup went about designing or manufacturing the nodes in question.)

As far as I can tell, Arup's process worked like so: Engineers modeled a design space, defined boundary conditions at the attachment points (which were predefined), and applied a number of loading conditions to the part. Here the story gets less clear; some reports mention topology optimization, and others say that Arup worked with Within (which is *not* topology optimization). My suspicion is that they used something like solidThinking Inspire to create a design concept, and then modeled the final part manually in SolidWorks or similar. Regardless, we can be nearly sure that the model that was printed was indeed designed by a human; that is, the actual shapes and curves we see in the part on the right were explicitly defined by an actual engineer, NOT by a piece of software. This is because nearly every engineered component in AEC needs to be documented using traditional CAD techniques, and neither Within nor solidThinking (nor most of the design optimization industry) supports CAD export. As a result, most parts that could be said to be "designed by a computer" are really merely sketched by a computer, while the actual design & documentation is done by a human.

This may seem like a small quibble, but it's far from trivial. Optimization (whether shape, topology, or parametric) software is expensive, and as a result most of the applications where it's being adopted involve expensive end products: airplanes, bridges, hip implants, and the like. Not coincidentally, those products tend to have stringent performance requirements - which themselves are often highly regulated. Regulation means documentation, and regulatory bodies tend not to be (for totally legitimate reasons which are a bit beyond the scope of this blog post) particularly impressed with some computer-generated concept model in STL or OBJ format. They want real CAD data, annotated by the designer and signed off by a string of his or her colleagues. And we simply haven't even started to figure out how to get a computer to do any of that stuff.

I'm reminded here also of something that I've spent a bunch of time considering over the past six months. The name "CAD" (for Computer Aided Design) implies that SolidWorks and Inventor and Siemens NX are actively helping humans design stuff. To me, this means making actual design decisions, like where to put a particular feature or what size and shape an object should be. But the vast majority of the time that isn't the case at all. Instead, traditional CAD packages are concerned primarily with helping engineers to document the decisions that they've already made.

The implications of this are huge. Traditional CAD packages never had to find ways for the user to communicate design intent; they only needed to make it easy for me to, for instance, create a form that transitions seamlessly from one size and shape to another. For decades, that's been totally fine: the manufacturing methods that we had were primarily feature based, and the range of features that we've been good at making (by milling, turning, grinding, welding, etc) is very similar to the range of features that CAD packages were capable of documenting.

But additive manufacturing doesn't operate in terms of features. It deals with mass, and that mass is deposited layer by layer (with the exception of technologies like directed energy deposition, which is different in some ways but still not at all feature based). As a result, it becomes increasingly advantageous to work directly from design intent, and to optimize the design not feature by feature but instead holistically. 

One major philosophical underpinning of most optimization software (like both Within and solidThinking Inspire) is that the process of optimizing mass distribution to meet some set of design intentions (namely mechanical strength and mass, though longtime readers of this blog will know that I feel that manufacturability, aesthetics, and supply chain complexity must be considered in this calculation as well) is a task better suited to software than to humans. In this respect, these tools stand squarely opposed to the history of Computer Aided Documentation: they want CAD software to make actual design decisions, presumably with the input and guidance of the engineer.

If it's not clear, I agree with the movement towards true computer aided design. But CAD vendors will need to overcome a number of roadblocks before I'd be comfortable saying that my computer designs anything in particular:

First, we need user interfaces that allow engineers to effectively communicate design intent. Traditional CAD packages never needed this, and optimization software has only just begun the task of rethinking how engineers tell their computers what kind of decisions they need them to make. 

Second, we need to expand the number of variables we're optimizing for. Ultimately I believe this means iteratively focusing on one or two variables at a time, as the curse of dimensionality will make high dimensional optimization impractical for the foreseeable future. It's because of this that I'm bullish on parametric lattice optimization (and nTopology), which optimizes strength and weight on lattice structures that are (given input from the engineer) inherently manufacturable and structurally efficient.
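
To illustrate the one-or-two-variables-at-a-time idea (and nothing more - this is a toy, not how any vendor's optimizer works), here's a rough sketch of coordinate descent on a stand-in objective, touching just two variables per pass:

```python
# Toy coordinate-descent sketch: optimize a few variables at a time rather than
# all dimensions at once. Purely illustrative; the objective is a stand-in.
import numpy as np

def objective(x):
    # A stand-in for "strength vs. weight": a simple quadratic bowl.
    return float(np.sum((x - np.arange(len(x))) ** 2))

def coordinate_descent(x, pairs, passes=50, lr=0.1):
    for _ in range(passes):
        for i, j in pairs:                  # only touch two variables per pass
            for k in (i, j):
                e = np.zeros_like(x)
                e[k] = 1e-4                 # numerical gradient along one coordinate
                grad = (objective(x + e) - objective(x - e)) / 2e-4
                x[k] -= lr * grad
    return x

pairs = [(0, 1), (2, 3), (4, 5)]            # which variables to tackle together
x_opt = coordinate_descent(np.zeros(6), pairs)
print(np.round(x_opt, 3), round(objective(x_opt), 6))
```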

Third, we need a new paradigm for documentation. This is for a few reasons. To start, the kinds of freeform & lattice structures that additive manufacturing can produce don't lend themselves to traditional three view 2D drawings. But in addition, there's a growing desire [citation needed] within engineering organizations to unify the design and documentation processes in some way - to make the model itself into a repository for its own design documentation.

These are big, difficult problems. But they're incredibly important to the advancement of functionally driven design, and to the integration of additive manufacturing's advantages (which are significant) into high value industries. And with some dedicated work by people across advanced design and manufacturing, I hope to see substantive progress soon :)


Thanks to Steve Taub and MH McQuiston for helping to crystallize some of the ideas in this post.

After publishing this post, I got into two interesting Twitter conversations about it - one with Ryan Schmidt, and the other with Kevin Quigley. Both of them know a lot about these subjects; I recommend checking the threads out.

On Optimization

Added on by Spencer Wright.

As I've explored further into the obscure regions of design for additive manufacturing, I've been thinking a lot about the philosophical underpinnings of optimization, and the role that design optimization can play in product development. Optimization is in the air today; the major CAD vendors all seem to have an offering which purports to create "the ideal part" with "optimum relation between weight, stiffness and dynamic behavior" and "the aesthetics you want." These promises are attractive for seemingly obvious reasons, but it's less clear how design optimization (at least as it exists today) actually affects the product development process.

Product development inherently involves a three-way compromise between quality, cost, and speed. The most critical trait of a product manager is the ability to establish a balance between these three variables, and then find ways to maintain it.

Understanding the strengths and limitations of manufacturing processes is, then, invaluable to me as a product manager. Given infinite resources, people are pretty good at making just about anything that can be designed; there are designers out there who make very successful careers just by pushing the boundaries of what is possible, and employing talented manufacturing engineers to figure out how to bring their designs into existence. But in my own experience, the more I understand and plan for the manufacturing process, the easier it has been to maintain a balance between quality and cost - and hence to create an optimal end product.

All of which makes me feel a strange disconnect when I encounter today's design optimization software, which always seems to focus specifically on creating Platonically perfect parts - with no regard for manufacturability or cost.

To be fair, traditional CAD programs don't usually have a strong manufacturability feedback loop either. Inventor, SolidWorks, and NX are all perfectly happy with me designing a fillet with a radius of .24999995" - when a 1/4" radius would work just fine and cost much less to manufacture. In this way, traditional CAD requires the user to have an understanding of the manufacturability of the features that she designs - a requirement which, given the maturity and nature of conventional manufacturing methods, is not unreasonable.
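
As a toy example of the kind of manufacturability feedback that's missing, here's a trivial "dimension lint" that flags values sitting suspiciously close to (but not exactly on) a standard fractional size. The 1/64" grid and the tolerance are arbitrary assumptions:

```python
# Toy "manufacturability lint": flag dimensions that are within a hair of a
# standard fractional inch size but not exactly on it (e.g. .24999995" vs 1/4").
# The 1/64" grid and the tolerance are arbitrary assumptions for illustration.
from fractions import Fraction

def nearest_standard(value_in, grid=Fraction(1, 64)):
    steps = round(value_in / float(grid))
    return steps * grid

def lint_dimension(value_in, tol=1e-4):
    std = nearest_standard(value_in)
    err = abs(value_in - float(std))
    if 0 < err < tol:
        return f'{value_in}" is within {err:.8f}" of {std}" -- consider using {std}"'
    return None

print(lint_dimension(0.24999995))   # flagged: within 0.00000005" of 1/4"
print(lint_dimension(0.250))        # None: already a round number
```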

But the combination of additive manufacturing on one hand, and generative design on the other, produces vastly different effects. No longer does a designer work on features per se. There's no fillet to design in the first place, only material to move around in 3D space. Moreover, the complex interaction between a part's geometry and its orientation on the build platform produces manufacturability problems (overhanging faces and thermal stresses, to name two) that are difficult to predict - and much harder to keep in mind than things like "when you design fillets, make their radii round numbers."

The remarkable thing about AM design optimization software, then, isn't that it allows me to create expensive designs - it's that these kinds of manufacturing factors (orientation to the build platform, and the structural and thermal effects that it produces) aren't treated as things which need to be optimized for at all.

The purpose of optimization should be to help me, as a product manager, design optimal *products* - not to chase some Platonic ideal.

So: Give me a way to incorporate build orientation, overhanging faces, and slicing data into my designs. Those variables are critical to the balance between cost, quality, and speed; without them, the products I design will never be optimal. 
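
To sketch what that could look like - an illustration, not a claim about any vendor's roadmap - here's a rough Python pass that treats build orientation as a design variable and scores candidate orientations by unsupported surface area. It assumes the trimesh library; the file name, angle grid, and 45° threshold are placeholders:

```python
# Sketch: treat build orientation as a design variable by sweeping candidate
# rotations and scoring each by overhanging surface area (faces pointing more
# than 45 degrees past vertical). Assumes `trimesh` is installed; the file
# name, angle grid, and threshold are placeholders.
import numpy as np
import trimesh

def overhang_area(mesh, threshold_deg=45.0):
    down = np.array([0.0, 0.0, -1.0])                # build direction is +Z
    cos_limit = np.cos(np.radians(threshold_deg))
    downward = mesh.face_normals @ down > cos_limit  # faces angled past the limit
    return mesh.area_faces[downward].sum()

def best_orientation(mesh, angles_deg=range(0, 360, 15)):
    best = (None, np.inf)
    for axis in ([1, 0, 0], [0, 1, 0]):              # tip about X, then about Y
        for a in angles_deg:
            rot = trimesh.transformations.rotation_matrix(np.radians(a), axis)
            candidate = mesh.copy()
            candidate.apply_transform(rot)
            area = overhang_area(candidate)
            if area < best[1]:
                best = ((axis, a), area)
    return best

mesh = trimesh.load("lattice_stem.stl")              # placeholder file name
(axis, angle), area = best_orientation(mesh)
print(f"least overhang: rotate {angle} deg about {axis} -> "
      f"{area:.1f} unsupported area (model units^2)")
```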

Remeshing wishlist

Added on by Spencer Wright.

So: I need to reduce overhangs on my lattice stem design. As you can see here in MeshMixer, there are a lot of them (highlighted in red/blue):

(Incidentally: If you know of a really easy way to measure the surface area of unsupported faces in an STL/OBJ, let me know! Right now I'm doing some crazy stuff in Blender (thanks, Alex) but I'd love a one-step process if there is one.)
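
For what it's worth, here's a rough sketch of one way to get that number using the trimesh Python library; the +Z build direction, 45° threshold, and file name are assumptions, and MeshMixer's own red/blue highlighting may use slightly different rules:

```python
# Rough sketch of measuring unsupported (overhanging) surface area in an
# STL/OBJ using the trimesh library. The +Z build direction, 45 degree
# threshold, and file name are assumptions.
import numpy as np
import trimesh

mesh = trimesh.load("lattice_stem.stl")        # placeholder file name

down = np.array([0.0, 0.0, -1.0])              # assume the part builds along +Z
threshold_deg = 45.0                           # faces steeper than this need support
unsupported = mesh.face_normals @ down > np.cos(np.radians(threshold_deg))

unsupported_area = mesh.area_faces[unsupported].sum()
total_area = mesh.area_faces.sum()
print(f"unsupported: {unsupported_area:.1f} of {total_area:.1f} "
      f"(model units^2), {100 * unsupported_area / total_area:.1f}% of surface")
```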

Now as you'll recall, I'm generating these beams (and varying their thicknesses) in nTopology Element, but the method I'm using starts by looking at all the edges in an STL/OBJ that I create in MeshMixer: when I generate the lattice in Element, a beam is created on every triangle edge of that mesh. So if I want to control the orientation of the beams in the lattice, I really need to start with that input STL/OBJ.

But here's the thing: remeshing algorithms tend to prefer isotropic (equilateral) triangles, which result in a *lot* of beams that end up oriented nearly parallel to any given plane (e.g. the build plane). They also prefer nodes that have a valence close to 6 (valence refers to the number of other nodes that a given node is connected to).

This is largely because most remeshers assume that you want to preserve features - a reasonable assumption, in general. But for the vast majority of my design (basically everywhere except the clamp faces and bolt hole features), I care *way* more about eliminating overhanging faces than I do about feature preservation.
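
Before committing to a remeshing strategy, it would help to audit a candidate mesh for these two properties - how many would-be beams lie close to the build plane, and what the node valences look like. Here's a rough sketch, again assuming the trimesh Python library (the 30° cutoff and file name are arbitrary placeholders):

```python
# Quick audit of a candidate remesh: how many lattice beams (triangle edges)
# would end up nearly parallel to the build plane, and what the node valences
# look like. Assumes trimesh; the 30 degree "too flat" cutoff is arbitrary.
import numpy as np
import trimesh

mesh = trimesh.load("remeshed_stem.obj")          # placeholder file name

# Beam inclination: angle of each unique edge relative to the XY build plane.
edges = mesh.vertices[mesh.edges_unique]          # shape (n_edges, 2, 3)
vectors = edges[:, 1] - edges[:, 0]
inclination = np.degrees(np.arcsin(
    np.abs(vectors[:, 2]) / np.linalg.norm(vectors, axis=1)))
too_flat = inclination < 30.0
print(f"{too_flat.sum()} of {len(vectors)} edges "
      f"({100 * too_flat.mean():.1f}%) are within 30 deg of the build plane")

# Node valence: how many edges meet at each vertex (isotropic remeshers push
# this toward 6; I'd like to see it lower).
valence = np.bincount(mesh.edges_unique.ravel(), minlength=len(mesh.vertices))
print(f"mean valence {valence.mean():.2f}, "
      f"{(valence >= 6).mean() * 100:.1f}% of nodes have valence >= 6")
```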

Over the next week, I'll be playing with the underlying mesh a bunch more and trying to find a way to reliably reduce overhangs in my end design. Specifically, I'm looking at remeshing methods that result in:

  • Anisotropic triangles, specifically ones whose orientation I can set as a global variable. I want my triangles to be longer than they are wide/deep.
  • Nodes with valences <6. This will essentially reduce my beam count (I think).
  • A mesh which is adaptive (as opposed to regular), so that I can preserve my mechanical features (high density mesh) and still reduce beam count elsewhere (low density mesh).

I'm also interested in using some curved beams (especially in the clamp areas), but that's prioritized below the things above.

More soon!

Quick & closer

Added on by Spencer Wright.

From the end of the day yesterday:

This is still not manufacturable, and is still missing all the mechanical features too. But it's getting there! By combining a skin lattice (which my part definitely needs in at least some regions, for instance the clamp faces) and a minimal, bonelike volume lattice, I hope to be able to create something that's significantly lighter than a comparable tube-to-tube (e.g. welded) structure.

The next step, I think, is to reintroduce the mechanical features (at least some of them) into the model *before* I remesh the surfaces. I'd really like the mesh density to vary with the kinds and intensities of the forces that the part is going to be under: for instance, all of the bolts and clamp faces will want higher density meshes around them. At the moment my best bet is to do that manually, by selecting areas I want to be at higher densities and just remeshing them to suit my intuition.
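
One way I can imagine semi-automating that: derive a target mesh density (really a target edge length) at each vertex from its distance to the features that see load. A rough sketch, assuming trimesh; the feature locations, falloff distance, and edge lengths below are made-up placeholders, and an actual remesher would still need to accept this as a sizing field:

```python
# Sketch of a load-aware density field: target edge length per vertex, based on
# distance to features that see load (bolt holes, clamp faces). The feature
# points, falloff distance, and edge lengths are made-up placeholders.
import numpy as np
import trimesh

mesh = trimesh.load("stem_body.stl")                  # placeholder file name

# Hypothetical feature locations (e.g. bolt hole centers, clamp face centroids).
feature_points = np.array([
    [0.0, 0.0, 0.0],
    [0.0, 35.0, 10.0],
])

FINE, COARSE = 1.0, 6.0        # target edge lengths near / far from features
BAND = 25.0                    # distance over which density falls off

# Distance from every vertex to the nearest feature point.
d = np.min(np.linalg.norm(
    mesh.vertices[:, None, :] - feature_points[None, :, :], axis=2), axis=1)

# Blend from fine to coarse over the falloff band.
t = np.clip(d / BAND, 0.0, 1.0)
target_edge_length = FINE + (COARSE - FINE) * t

print("target edge length range:",
      target_edge_length.min(), "to", target_edge_length.max())
```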

More soon :)

A bet

Added on by Spencer Wright.

I *really* like bets. Not that I'm a gambler; I just like the idea that strong feelings should be backed up by dollars on the table (note: this is related to my distrust of focus groups & user feedback in general). One of my favorite recent bets is Felix Salmon vs. Ben Horowitz on Bitcoin, and I'm always on the lookout for things I feel strongly enough about to place a stake on.

Well last week, that chance arose. I was having coffee with Andre Wegner, and (as is our wont) we got to talking about the prospects for simulation of physical systems. I've been playing more and more with design optimization (and therefore FEA) software, and have increasingly felt that design automation is impractical, and will develop slowly (if at all). Andre is a technological optimist; he believes that an increasingly large amount of our design, testing, and optimization will be done virtually.

A concrete example arose: Andre believes that the field of computational fluid dynamics will progress quickly enough to make wind tunnels obsolete within our lifetime. I believe it won't.

So, the bet: 

If, in ten years (2025.09.18), wind tunnels are "still a thing," Andre owes me dinner. If they aren't, I owe him dinner.

I'm looking forward to this.