
The Creo 12 CAD system is now infused with more industry-leading composite capabilities, expanded Model-Based Definition (MBD), real-time simulation and streamlined workflows to make it even easier for you to deliver your best designs in less time.

EDITORIAL

Editor

Stephen Holmes

stephen@x3dmedia.com

+44 (0)20 3384 5297

Managing Editor

Greg Corke

greg@x3dmedia.com

+44 (0)20 3355 7312

Consulting Editor Jessica Twentyman jtwentyman@gmail.com

Consulting Editor

Martyn Day martyn@x3dmedia.com

+44 (0)7525 701 542

DESIGN/PRODUCTION

Design/Production

Greg Corke greg@x3dmedia.com

+44 (0)20 3355 7312

ADVERTISING

Group Media Director

Tony Baksh tony@x3dmedia.com

+44 (0)20 3355 7313

Deputy Advertising Manager

Steve King steve@x3dmedia.com

+44 (0)20 3355 7314

US Sales Director Denise Greaves denise@x3dmedia.com +1 857 400 7713

SUBSCRIPTIONS

Circulation Manager

Alan Cleveland alan@x3dmedia.com

+44 (0)20 3355 7311

ACCOUNTS

Accounts Manager Charlotte Taibi charlotte@x3dmedia.com

Financial Controller

Samantha Todescato-Rutland sam@chalfen.com

ABOUT

Summer is upon us, and while we’re all hoping that the sun will stick around for more than a few brief days, there’s still work to be done. For this issue of DEVELOP3D, we’ve put our time indoors to good use by checking out the exciting developments in visualisation and extended reality (XR), with both developing at a brisk pace.

There’s our guide to the best headsets for product design, for example. Almost all the HMDs featured offer video passthrough, allowing 3D models to be placed in real-world context, and the price/quality ratio on offer is now just as impressive as the images produced.

We also take a closer look at Onshape’s offering for the Apple Vision Pro headset – Onshape Vision Pro – and find that its cloud-native platform is an excellent launchpad for jumping in and out of XR.

Design studio Tangerine gives us a look at its work designing the new interiors and livery for the Japanese bullet train, in which it created cabins using Blender to allow clients to explore them in VR.

With AI renderings still in their infancy, we chatted to Chinese marketing agency Mediaman about its software, Jester3D. By offering hi-res assets and some basic shape tools, this gives Mediaman clients the ability to create visualisation scenes to guide campaigns, or simply produce social media content on demand.

Finally, our cover story this issue explores the incredible work done by studio Goodwin Hartshorn for Bowers & Wilkins. More than just gorgeous visuals (they created several thousand for the earbuds alone), it’s a showcase of where great research and engineering can take a design.

Sound good? Then read on – whether you’re indoors or out.

DEVELOP3D is published by X3DMedia, 19 Leyden Street, London E1 7LE, UK

T. +44 (0)20 3355 7310

F. +44 (0)20 3355 7319

Stephen Holmes Editor, DEVELOP3D Magazine, @swearstoomuch

Automate SOLIDWORKS manufacturing processes & sell digitally using DriveWorks

DriveWorks is flexible and scalable. Start for free, upgrade anytime. DriveWorksXpress is included free inside SOLIDWORKS or start your free 30 day trial of DriveWorks Solo.

DriveWorks Pro

30 DAY FREE TRIAL

DriveWorksXpress

Entry level design automation software included free inside SOLIDWORKS®

Entry level SOLIDWORKS part and assembly automation

Create a drawing for each part and assembly

Find under the SOLIDWORKS tools menu

One time setup

Modular SOLIDWORKS® automation & online product configurator software

DriveWorks Solo

SOLIDWORKS® part, assembly and drawing automation add-in

Automate SOLIDWORKS parts, assemblies and drawings

Generate production ready drawings, BOMs & quote documents automatically

Enter product specifications and preview designs inside SOLIDWORKS

Free online technical learning resources, sample projects and help file examples

Sold and supported by your local SOLIDWORKS reseller


Set up once and run again and again. No need for complex SOLIDWORKS macros, design tables or configurations.

Save time & innovate more

Automate repetitive SOLIDWORKS tasks and free up engineers to focus on product innovation and development.

Eliminate errors

DriveWorks' rules-based SOLIDWORKS automation eliminates errors and expensive, time-consuming design changes.

Complete SOLIDWORKS part, assembly and drawing automation

Automatically generate manufacturing and sales documents

Configure order specific designs in a browser on desktop, mobile or tablet

Show configurable design details with interactive 3D previews

Integrate with SOLIDWORKS PDM, CRM, ERP, CAM and other company systems

Scalable and flexible licensing options

Sold and supported by your local SOLIDWORKS reseller

Connect sales & manufacturing

Validation ensures you only offer products that can be manufactured, eliminating errors and boosting quality.

Integrate with other systems

DriveWorks Pro can integrate with other company systems, helping you work more efficiently and effectively.

Intelligent guided selling

Ensure your sales teams / dealers configure the ideal solution every time with intelligent rules-based guided selling.

NEWS

Intel launches Arc Pro B50 and B60 desktop GPUs, PTC unveils Creo 12, InfinitForm attracts influential investors to seed round, and more

FEATURES

Comment: Andrew Bishop of Lightwave on HMDs

Comment: SJ on graduate recruitment challenges

Visual Design Guide: Foster + Partners’ Ori birdfeeder

COVER STORY Achieving the right fit in earbuds

D3D’s guide to the best HMDs for product design

Q&A: Onshape Vision with Greg Brown of Onshape

Hot seat: Getting data translation right at Adient

Lexus unveils its Black Butterfly cockpit control concept

Tangerine takes a fresh look at Japanese bullet trains

Runway-ready designs at New York Embroidery Studio

REVIEWS

First look at Jester3D from Mediaman

HP ZBook Ultra G1a / AMD Ryzen AI Max Pro

THE LAST WORD

While AI will certainly replace many existing tools and skills, product designers and engineers should remember that some knowledge is irreplaceable and some is transferable, writes Stephen Holmes


INTEL EXPANDS ARC PRO FAMILY WITH LAUNCH OF B50 & B60 DESKTOP GPUS

» Built on Intel’s Xe2 'Battlemage' architecture, the new workstation GPUs promise a significant performance uplift and memory increase compared to their predecessors

Intel has launched the Arc Pro B50 (16GB) and Arc Pro B60 (24GB), two new professional desktop GPUs built on its Xe2 ‘Battlemage’ architecture, featuring Intel Xe Matrix Extensions (XMX) AI cores and advanced ray tracing units.

These new PCIe Gen 5 GPUs offer a big performance uplift and a significant increase in memory compared to the previous-generation PCIe Gen 4 ‘Alchemist’-based Arc Pro A50 (6GB) and A60 (12GB).

On paper, this makes the Arc Pro B50 and Arc Pro B60 better equipped to handle more demanding workflows, including larger viz scenes in applications such as D5 Render and Twinmotion, and AI tools like Stable Diffusion.

“The Intel Arc Pro B-Series showcases Intel’s commitment in GPU technology and ecosystem partnerships,” said Vivian Lien, vice president and general manager of client graphics at Intel.

“With Xe2 architecture’s advanced capabilities and a growing software ecosystem, the new Arc Pro GPUs deliver accessibility and scalability to small and medium-sized businesses that have been looking for targeted solutions.”

The Intel Arc Pro B50 (16 GB) is a low-profile, dual-slot graphics card with a total board power of 70W. It's compatible with small form factor (SFF) and micro workstations such as the HP Z2 Mini and Lenovo ThinkStation P3 Ultra SFF, although no major workstation OEMs have yet confirmed support.

Priced at $299 MSRP, the Arc Pro B50 competes directly with the Nvidia RTX A1000 (8 GB) – but offers double the memory, as Intel executives are keen to highlight. It will be available from Intel-authorised resellers starting in July 2025.

The Intel Arc Pro B60 (24 GB) is a full-sized board with 120W to 200W of total board power. The power range given is broad because the GPU will also be available from seven board partners. Add-in board partners include (but are not necessarily limited to) ASRock, Gunnir, Lanner, Maxsun, Onix, Senao and Sparkle, with availability starting in June 2025.

The partners’ offerings look likely to come in many different shapes and sizes. For example, ASRock offers a passively cooled board, while Maxsun has one with two GPUs on the PCB.

Intel does not plan to set a manufacturer's suggested retail price, or MSRP, for the Intel Arc Pro B60, but executives at the company currently estimate its value to be around $500.

For the Arc Pro B60, Intel is going hard on its AI messaging, pitching it as an 'inference workstation GPU' that is 'LLM inference ready'. According to an Intel spokesperson, inference workstations enable small-to-medium businesses to run and codify know-how and data locally, so it doesn't get shared with third-party services.

The company claims its 24GB of memory gives the Arc Pro B60 a significant performance advantage in AI workflows over GPUs with 16GB of memory, such as the Nvidia RTX 2000 Ada and Nvidia RTX 5060 Ti. "[The Arc Pro B60] runs significantly faster as soon as the model sizes start getting close to 16GB," said an Intel spokesperson.

Intel has also announced Project Battlematrix, an internal code name for a new inference workstation platform providing up to eight Intel Arc Pro GPUs in a system and supporting up to 192GB of VRAM. According to Intel, this will allow it to run models with 70 billion parameters and more.

Finally, executives at Intel have also shared a future roadmap for the Arc Pro family. Later this year, the company will launch another Arc Pro GPU variant and improve support for virtualisation with features such as SR-IOV. www.intel.com

The Intel Arc Pro family continues to grow, with larger memory configurations and expanded software support

PTC CREO 12 RELEASE DELIVERS OVER 250 UPDATES TO USERS

With Creo 12, PTC has added 250 new enhancements intended to improve everyday use of the CAD system and boost user productivity. Integrated design, simulation and manufacturing capabilities have all been updated, on the promise of guiding users and teams more smoothly through the product development workflow.

A key update to Creo 12 is seen in its composite structure design capabilities, with tools that enable engineers to efficiently design, simulate and manufacture composite parts. These will enable users to achieve faster, more accurate creation of solid geometry from composite layers, create associative manufacturing reference models and merge plies from different zones.

The partnership between PTC and Ansys continues to bear fruit in Creo 12, with the goal of helping users iterate and optimise designs earlier in the design process. In this latest release, AI-powered generative design is provided for thermal optimisation studies, in addition to structural and modal analysis.

Creo 12 also sees an update to Ansys solvers (25R1), which should deliver simplified and improved results from both Creo Simulation Live and Creo Ansys Simulation.

Model-based definition is brought up to date and Creo 12 offers improved file export capabilities, including 3D PDF and STEP AP242, edition 3. Additionally, GD&T Advisor now supports datum reference features and intent surfaces, and annotations are easier to reuse. www.ptc.com

InfinitForm attracts influential investors

InfinitForm, creator of an AI-powered design optimisation platform for manufacturable, performance-optimised CAD models, has recently closed a $12.7 million seed round and has attracted some influential investors along the way.

The round was led by UP.Partners and also included Schematic Ventures, Counterpart Ventures, Yamaha Motor Ventures and former Autodesk CEO Carl Bass among its participants.

The company has said that the funding raised will be used to expand InfinitForm’s engineering and go-to-market teams, further develop its AI-powered design assistant capabilities, and to support the company’s growing rollcall of customers across aerospace and defense, automotive and industrial sectors.

Toolpath boosts investment in CAM

Toolpath, a maker of AI tools to optimise CNC machining, has closed an investment round led by machine tool manufacturer Kennametal and existing partner ModuleWorks.

Toolpath’s software integrates AI into the CAM process, optimising tool selection and toolpath strategies.

The platform analyses machining processes and the parts that a process is aiming to make, in order to provide manufacturers with the insights they need to decide which jobs to quote, estimate costs accurately and optimise machining operations.

www.toolpath.com

HyperMill gains smart linking tool

Open Mind has announced a range of enhanced features for its HyperMill 2025 suite for traditional precision machining and the post-processing of 3D printed parts.

A key addition is a new Linking Job tool in HyperMill Additive Manufacturing, which links multiple additive jobs with different technology parameters and 5-axis strategies to create an optimised workflow.

This includes an advanced 5-axis automatic tool orientation mode, for safe, efficient operations, even in tight spaces. www.openmind-tech.com

Lenovo updates AMD ThinkPads

Lenovo has updated its AMD-based mobile workstation line-up with the launch of the ThinkPad P14s Gen 6 AMD and ThinkPad P16s Gen 4 AMD.

Both laptops feature the 'Strix Point' AMD Ryzen AI Pro 300 Series processor, which comes with integrated AMD Radeon 890M graphics and a neural processing unit (NPU) with up to 50 TOPS for AI workloads, fulfilling the requirements for Microsoft's Copilot+ PC certification. Starting at 1.39kg, the Lenovo ThinkPad P14s Gen 6 AMD is the thinnest and lightest mobile workstation in Lenovo's portfolio. Meanwhile, the Lenovo ThinkPad P16s Gen 4 AMD starts at 1.71kg for premium performance and portability. www.lenovo.com

Expanded model-based definition within Creo 12 aims to help engineers share accurate information with colleagues and suppliers

3D SYSTEMS UNVEILS PRINTER FOR HMLV MANUFACTURING

Executives at 3D Systems have announced the launch of the company's Figure 4 135 3D printer for high-mix, low-volume (HMLV) manufacturing needs – situations in which a manufacturer is typically looking to replace or supplement injection moulding.

The 5-watt, 2716 x 1528 resolution printer, offering 50-micron pixel size, is said to deliver out-of-the-box accuracy and repeatability for production applications, while supporting the Figure 4 advanced materials line-up.

Developed in accordance with IEC-62443 cybersecurity standards, the Figure 4 135 also supports automated workflows, with a built-in barcode scanner to support batch run capability and full production process traceability.

3D Systems executives point to electrical connectors as an example of traditionally manufactured plastic parts that rely on injection moulding, which requires tooling and long lead times. By 3D printing precision connectors, they say, manufacturers can achieve high fidelity, high thermal stress resistance and cost efficiency at high volumes.

As part of this particular solution, 3D Systems is also introducing a new material, Figure 4 Tough 75C FR Black. This is a tough, flame-retardant material recognised by UL with a UL94 V0 rating at thin wall thickness (0.4mm) and a relative thermal index (RTI) for long-term electrical use of 150°C and for mechanical use of 130°C.

It is said to be suitable for use in appliances, consumer electronics and automotive applications that require accuracy, heat resistance, durability, flexibility and electrical safety.

The Figure 4 135 3D printer joins a growing Figure 4 family, including the Figure 4 Production, Figure 4 Modular, Figure 4 Standalone and specialist Figure 4 Jewelry printers.

Elsewhere in its stereolithography 3D printing portfolio, 3D Systems announced it is enhancing its 3D printed casting performance by making the QuickCast Diamond build style available with 3D Systems’ PSLA 270.

This projector-based SLA 3D printer combines high-speed production and mechanical stability to deliver mid-size components that are 30% lighter and offer more consistent strength than previous QuickCast build styles. www.3dsystems.com

The Figure 4 135 printer is said to offer a potential replacement for injection moulding

ROUND UP

Trinckle 3D’s Fixturemate software for 3D-printed fixtures and tooling is to be integrated with the GrabCAD Print Pro software from Stratasys. The aim is to enable manufacturing team members without CAD skills to design and configure custom fixtures www.trinckle.com

PTC has announced the release of Onshape Advisor, an AI-powered assistant that provides users with guidance on CAD workflows, PDM best practices and Onshape platform capabilities. It was developed using the Amazon Bedrock machine learning platform www.ptc.com

Ten years since Conflux Technology developed and patented its first heat exchanger using additive manufacturing, the company has announced plans to open a new business hub in the UK to serve clients from the European aerospace, automotive and motorsport sectors www.confluxtechnology.com

Materialise adds nTop integration to Magics

Materialise has unveiled its Magics 2025 release with nTop implicit geometries integration, as well as two next-generation build processors and new partnerships with Raplas and One Click Metal for AI.

The combination of Magics with nTop, the company claims, will help users overcome traditional 3D printing design challenges by reducing build preparation time from days to seconds while still maintaining design precision.

This new capability in Magics, combined with Materialise’s next generation of build processors, promises to make it possible for users to 3D print complex parts that were previously considered unprintable. In addition, executives at the company claim that the latest release’s extended BREP (boundary representation)

The UK Digital Twin Centre has opened in Belfast. The initiative has been delivered by Digital Catapult, funded by the Belfast Region City Deal and Innovate UK, and aims to add £62 million in value to the economy over the next decade by helping industry take advantage of digital twin technologies www.digitalcatapult.org.uk

CDG 3D Tech has opened a new UK demonstration facility for customers and prospects. Based in Basingstoke, Hampshire, it houses a wide range of metal, ceramic, plastic filament, pellet and resin machines, along with consumables, scanners and post-processing equipment www.cdg.uk.com

www.materialise.com

The latest VR/XR hardware is evolving fast, but other technologies need to catch up if designers and engineers are going to make the most of this brave new world of capabilities, writes Andrew Bishop, creative director at LightWave 3D

Virtual reality, or VR, has been around for a long time. But early headsets were expensive, low-resolution and heavy, and the control systems on which they relied were unwieldy. There were wires everywhere. Even today, many headsets still suffer from excessive bulk and cost a great deal of money, especially in cases where users are determined to achieve the optimum speeds and resolutions necessary without being tethered. This frustrating situation, however, looks set to change.

First, what Apple achieved with its Apple Vision Pro was the required resolution – 4K per eye – with no wires and the ability to use hand gestures and movements in place of controllers. The headset is fast, fully immersive and non-fatiguing. It’s perfect for enterprise use, but priced at $3,499, it was never going to have mass-market appeal.

Now, headsets are emerging with similar specifications, but priced at below $2,000, and in some cases, below $500. They are super-light, truly innovative and deliver high-speed refresh at 4K per eye resolution. They are easy to set up and easy to use. Now everyone can have access, but product designers and engineers may be the people best-placed to get the most from the capabilities they offer.

NEXT-GENERATION HEADSETS

It will take time for uptake to accelerate. New features in our most-used software packages will be needed to get the best results for this new generation of headsets. Extended reality, or XR, involves extending reality with overlays, bringing your environment to life through XR/AR enhancement. From a 3D perspective, most of these types of overlays are created in 2D to keep memory overhead super-low.

As a result, in the short term, 3D packages may be creating these types of assets, rather than fully immersive scenes, where the overhead would be too much for optical wear to handle.

For that to happen, I believe headsets need to take the form of sunglasses. And processing will need to be conducted on a secondary device, like a smartphone, and transmitted to the headset.

A second technology that will revolutionise the use of this type of headset will be transparent LCD, where graphics appear in your view but you can still see through them to the real-world scenario. Sadly, this is some way off. Currently, transparent technology doesn’t offer a dot pitch small enough to offer a sharp image via a pair of glasses. That said, the technology is advancing at a rapid rate.

By 2030, we may have the technology to enable this. The issue then becomes battery power requirements. A battery will not only need to be lightweight, but also support several hours of use.

Again, the next generation of solid-state batteries is around the corner, which promise to store lots of power and charge in minutes. As for transmission from a phone, this technology is already available so we’re already close to a situation where this solution will be workable.

At LightWave 3D, we're working on super-low memory 3D geometry formats ideal for this type of product. We're also working on next-generation real-time rendering technology, with our first implementation being Real-Time Preview Rendering (RiPR), included in our latest release, LW25. This technology is amazing, has a low memory footprint and will improve rapidly in the next two years.

It could potentially be embedded into these technologies, giving low-latency, low-memory, real-time HDR-lit geometry for use in web, AR/XR and VR. New formats now include embedded real-time alpha channel support, so fully animated floating objects with texturing and real-world lighting are entirely possible.

To push VR/AR/XR forwards from an exciting concept to a daily habit, there are still a few milestones we need to hit

MAKING VR A DAILY HABIT

While LightWave has included stereoscopic rendering for at least 15 years, it now also incorporates a specific VR camera for easy set-up of 360-degree rendering. This reflects a shift we are seeing towards VR becoming more prevalent across design, engineering and architecture, as well as media and entertainment.

To push VR/AR/XR forwards from an exciting concept to a daily habit, there are a few milestones we need to hit. Hardware has to evolve: we need sunglasses-thin frames, sub-200g weight, 4K-per-eye micro OLED and transparent LC displays. This should be powered by phone-side compute and batteries that top up in minutes.

Bandwidth needs to be invisible, so that nobody has to think about cables ever again. I believe a common, ultra-lean asset pipeline is essential – think glTF-next plus RiPR – so a single file streams everywhere, without polygon triage.

Finally, mainstream authoring tools need to treat VR/AR output as a firstclass citizen: one click to publish, instant previews, and APIs that allow designers to bolt XR onto existing workflows.

When optics, formats and software align, prices will plummet and wearing immersive tech will start to feel as natural as putting on reading glasses. And that's when mass adoption will take off.

ABOUT THE AUTHOR: Andrew Bishop has over 25 years of experience as an animator and studio director producing hundreds of CG shots for television and film. A member of the team that purchased LightWave 3D two years ago, he is heavily involved in developing new technologies for the software and toolset www.lightwave3d.com

With AI increasingly used in recruitment, it's tougher than ever for new graduates to get hired. But according to our regular columnist SJ, there's still plenty they can do to increase their chances of landing the right opportunity

My first ever engineering class opened with a piece of advice that has stuck with me. The professor began her lecture with the following words: “I believe that only two things in life are ever guaranteed: death and taxes. You can work hard – and we value hard work here – but that does not mean every student is guaranteed an ‘A’ in my class.”

As graduation season approaches for the class of 2025, I can’t help but recall that lesson. How I wish I had understood when I graduated that one of the things not guaranteed by my hard-earned engineering degree was an actual engineering job.

And if I thought we had it tough when I graduated, the economic conditions for this current generation are even more challenging. Competition for graduate roles has never been fiercer.

AI and algorithms now enable companies to automate the filtering of applicants against a stricter set of criteria. As a result, even if you have the skills and qualifications on paper, your resume may never get seen by a human recruiter if it doesn't make it through that filtering process.

Additionally, many students are using AI to write resumes customised to each specific role and company. If you’re not using AI to game the system, you’re already falling behind in the race to be hired.

AIMING HIGH

So, what can you do if you still want to live the dream? First, keep in mind that, often, the game still revolves around the adage, ‘It’s not what you know, it’s who you know.’

So get out there and network! Join an engineering fraternity or society. My tip is to identify any clubs supported by your dream company, through participation on technical review boards, employee memberships, research collaborations, conference sponsorships, or university partnerships.

Don’t be afraid to get creative, either. If you have a dream company, find out where the company sports teams play, where staff grab coffee breaks, join the closest gym to their offices or find where employees drink after work. Basically, find a way to get chatting to people who already work there. That might sound cringe-worthy, or even downright manipulative, but take it from a person of colour: not all opportunities are created equally.

Many entries to opportunity come with barriers, such as socio-economic background, gender or ethnicity. Sometimes, your opportunity is on the ground floor, and on occasion, you have to be brave enough to crash-land through the glass ceiling to get past the door.

Also, protect your mental health. Handling rejection well is an under-utilised muscle for most of us. With the advent of AI-based hiring systems, applications can be screened and rejected at a much quicker turnaround speed. Rejection can affect your confidence, increase your stress levels and lead you to feel undervalued or overlooked.

That's perfectly natural – but I'm here to tell you that there's nothing wrong with you. This can certainly be a bruising experience, but it's one that many people go through – and survive.

KEEP YOUR CONFIDENCE HIGH

Take care of yourself along the way. With every tenth rejection email that I received, I’d go to a cafe and order my favourite drink and then go on a walk to clear my head. Keep your mind occupied and confidence high. Reach out to former classmates to reminisce about favourite projects in university. Take on tasks that remind you of your engineering strengths, whether that’s practicing your CAD skills, repairing household appliances or writing code.

You’re a good candidate. Don’t let an algorithm written by some jackass consultant who hasn’t had to apply for a job in decades determine your worth.

When it comes to recruitment, don’t let an algorithm written by some jackass consultant who hasn’t had to apply for a job in decades determine your worth

And whatever you do, remember that AI is automating not just hiring, but also many of the roles for which new graduates have typically applied in the past.

When my professor said that not every student was guaranteed an 'A', she was prepping us for an important lesson. Sometimes, the opportunities for success are finite.

Students moving into entry-level roles in a post-AI world face a higher bar when it comes to the definition of ‘entry-level’. Employers now look for higher level skill sets that can’t easily be automated with AI. For example, they want entry-level engineers to be already skilled in Excel, SQL, or Python to help with coding and data processing. These are skills I had the luxury to learn while I was already on the job.

So, what can you do when you’re a human competing with an algorithm? Lean into that human aspect. Get strategic. Don’t be afraid to recruit AI onto your own team, in order to help you come up with a rock-solid strategy and develop creative ideas that help you stand out to employers.

And remember, coding skills may be helpful, but developing your interpersonal skills will always get you further, faster. Best of luck!

ABOUT THE AUTHOR: SJ is a metal additive engineer, aka THEE Hottie of Metal Printing. SJ’s work involves providing additive manufacturing solutions and 3D printing of metal parts to help create a decarbonised world.

VISUAL DESIGN GUIDE ORI BIRDFEEDER

» With its bold shapes and primary colours, Ori is a sculptural but practical birdfeeder, designed by Foster + Partners Industrial Design to bring a touch of architectural flair to the garden

VERSATILE DEPLOYMENT

Ori can be suspended from a tree branch, used as a floor-standing feeder, or as a standing water bath. Unlike traditional birdfeeders, it introduces a bold, graphic presence that can either contrast with or blend beautifully into natural surroundings, redefining an outdoor accessory as a sculptural statement

STAR DESIGNER

The project was instigated by Norman Foster himself, who personally provided sketches for the idea. The name Ori originates from ‘tori’, the Japanese word for bird

ROBUST BUILD

Composed of spun aluminium cones, stainless steel elements and threaded aluminium tubes, the vertical axis of the Ori locks together all components, ensuring stability and ease of maintenance

CLEAN EATING

Bird feeders can be a significant source of disease transmission, so Ori prioritises ease of disassembly and cleaning/sanitation

WEATHER PROOFING

The bird feeder is designed to protect feed from the rain, while simultaneously keeping it aerated via the perforated tray

DINNER TIME

A swappable feeding tube and cages can house a variety of food types – including seeds, nuts and suet – so that owners can respond to seasonal changes and visiting bird species in their garden

BATH TIME

The Ori’s water bath features an overflow drainage system, which directs excess water through a central tube to the ground

LISTEN IN

» For the designers at Goodwin Hartshorn, it’s solving the most technical challenges that often leads to the most beautiful results. Stephen Holmes speaks to the studio’s co-founders about their hands-on work crafting the latest luxury earbuds for Bowers & Wilkins

Bowers & Wilkins earbuds have received incredible reviews for their sound, comfort and aesthetics

LUXURY

Everyone’s ears are different. In fact, your own right ear and left ear may well be quite different from each other. That makes it extremely challenging to design a universally comfortable earbud.

Since 2003, design studio Goodwin Hartshorn has been developing products for luxury British audio brand Bowers & Wilkins, from loudspeakers to wearables.

A recent focus has been the development of two new pairs of earbuds, the Pi6 and Pi8 models. These mark a change in direction for Bowers & Wilkins from its previous Pi5 and Pi7 models, which were developed by another design house. While the earlier models have been a commercial success for the company, the fit they offered was not perfect for every customer. This was an aspect of the product the company was keen to improve.

“I would say a large proportion of our process is about the surfaces, the design for manufacture, the aesthetics and so on,” says Goodwin Hartshorn co-founder Richard Hartshorn. “And it’s often critical that we solve some technical problems to almost release the industrial design, to allow the industrial design to progress.”

The Pi6/Pi8 project began with a deep dive into the ergonomics of earbuds, he explains. The agile nature of the studio’s design process, meanwhile, meant that a great deal of focus was also dedicated to understanding variances in ear shapes and sizes. The race was on to make wearables that would truly outshine the competition, not just in performance but also in comfortable fit.

This meant studying the shapes and dimensions of human ears, and in particular, the cavum concha (the section of an ear where the earbud nestles), as well as the ear canal and its angle relative to the cavum concha.

The design studio consulted a broad variety of data gathered around the world, including 3D scans of many ears, all of which helped them to identify ‘critical users’.

As Hartshorn points out: “There isn’t much point in doing a load of testing for people in the middle of the spectrum. You might as well concentrate on the people with the smallest ears and the largest ears, because they’re your critical users.”

Give these critical users a comfortable fit, he reasons, and you can be pretty sure you’re providing a comfortable fit for every other customer.

Studying the human ear led to a set of constraints from which initial CAD models were created in Solidworks. “We wanted to use Solidworks because it’s parametric. We knew there was going to be a process of refinement. Doing it parametrically would make those refinements more easy,” says co-founder Edward Goodwin.

PROCESS OF ELIMINATION

The goal in this project was to iterate on various design ideas within the specified constraints as quickly as possible, printing new forms of the earbuds every day using a Formlabs Form 2 3D printer.

Elements of these prototypes were refined using hand tools, helping the designers to home in on a design, but also throw in different concepts quickly. The process was rigorous. When the first 20 models failed to provide a comfortable fit for everyone, tests with critical users quickly accelerated to 120 models and beyond.

“We had various philosophies on how to develop the ergonomics of the buds to start off with – almost paths that we were following – and we were using these [critical users] to filter them out,” explains Hartshorn.

“So we kept narrowing down, and then, once we started to get these candidates, which were benchmarked against competitors, we could then run this out to hundreds of people.”

In fact, the project was not just about designing earbuds, but also about designing a process to develop the earbuds. “And that was really fascinating to work on! We could just see, day by day, week by week, the gains we were getting and the improvements.”

Goodwin adds: “We realised that we were getting onto something quite special, because more and more people were saying how comfortable the earbuds felt. Our engineering background really underpins us, and often we’re working from the inside out when we’re developing our designs. In this instance, we were working from the outside in. Having developed this ergonomic platform, we then moved onto developing the industrial design.”

GET PHYSICAL

Creating physical prototypes often helps the Goodwin Hartshorn team build objects that offer more than their initial function. Once in the hand, shapes become more than simple aesthetics. Even with mountains of earbuds 3D printed on the Formlabs machine, hand-sculpted foam double-scale models were still used to work out delicate features and balance the shapes.

● 1 The top-of-the-range Pi8 earbud is designed to fit perfectly inside the ear while reflecting luxury on the outside

‘‘ Often, we're working from the inside out when we're developing our designs. In this instance, we were working from the outside in ’’ Edward Goodwin, co-founder of Goodwin Hartshorn

“We strongly feel you should be [building physical models] before you create your first CAD model, because as soon as you create the CAD model, you're stuck in a reality that has been created on a digital screen, where your sense of scale is all over the place,” says Hartshorn.

“Having said that, I think we’re at a point now where you can often create CAD models much more quickly, so you can 3D print things and so on. That equation is changing. Often now, it is more about hacks.”

And there is no shortage of ‘hacks’ on display at Goodwin Hartshorn’s design studios in Deptford, South London, as the co-founders open boxes and boxes of early prototypes from other projects. These include wood and card models, handles and touchpoints carved from foam, built up with modelling clay or with pieces of milled brass added.

A prime example is the studio’s work on a surgical stapler, whereby it evolved an existing tool to not only incorporate better ergonomics, but also to make the entire process of loading it and priming it far more intuitive for the user.

COLOUR, MATERIAL, FINISH

Incorporating the physical alongside the digital is also key for developing the CMF (colour, material, finish) aspects of a design. In the case of the Bowers & Wilkins Pi6 and Pi8 earbuds, a wide range of possible colours would need to be considered, in order to maximise the product’s appeal for a global audience.

For Goodwin Hartshorn, collaboration software Miro acts as an infinite whiteboard where renders and lots of other imagery and notes can be added. The earbud project involved adding over 2,000 Keyshot product renders to its Miro canvas, helping the team work down to the final four colourways for the two earbud models; the whiteboard even helped them explain to the client how they made the decision.

“Previously we used Maxwell Render, but where renders were taking hours and hours, with Keyshot they’re taking seconds,” says Goodwin. “We might be doing renders that take 10 seconds, but that’s enough time for us to simulate different colours and finishes on the buds.”

“It can be that you'll get a real, physical product, and you're just Dremelling out a little bit, splicing in a bit of wood or brass, just to prove a point,” says Hartshorn. But putting it into the hand of the client next to a render on a screen helps the team imagine a finished product more easily. “It doesn't lie. There's no smoke and mirrors to it. There's a physical object. And if they can use that physical object, you know you've proven the point.”

● 2 Physical stacks, representing product CMF, help convey designs to stakeholders

● 3 3D printed models help ensure the tactility of products as well as the packaging of key electronics

Despite the high fidelity of the Keyshot renders, the Goodwin Hartshorn co-founders feel there’s still a need to communicate colours and materials physically with clients and manufacturers. In situations like boardroom presentations, or ensuring factories and suppliers can produce the finishes accurately across multiple materials and processes, having physical samples helps remove any false assumptions.

Hartshorn explains that a key reason is that everyone’s computer screen and calibration is different. Big boardroom presentation screens are typically among the worst for displaying colour, saturating it in weird ways, so that everybody ends up with a different sense of what that colour actually means, states Hartshorn.

To conquer this, the team have CMF plaques produced in the exact colours and roughly in proportion to the amount of surface area the finished product would display. Some are tubular, allowing the light to reflect with a curvature more natural to that of an earbud, while other plaques have gentler curved surfaces alongside metallic tubes, better for explaining headphone touchpoints.

Hartshorn explains that these stacks of colours, materials and finishes are brilliant for working out combinations of colours, particularly when the final industrial design is still to be signed off, but the studio wants to demonstrate to stakeholders how it will look when combined.

“You then subsequently start making decisions about how polished you want the arm to look – you don’t necessarily want it to look too ‘in your face’ – and you’re also trying to balance things or choose one element to stand out or not. So it’s about setting levels,” he says.

Each sample stack is nestled in a small black jewellery box with all the relevant data, such as Pantone numbers. Goodwin continues to talk while pulling more from the drawers of a long storage cabinet: “You’ve got the renderings there, but then when you get these in your hands, you can tell which really works!”

There is some prototyping magic involved along the way, he explains: soft fabrics and leather are often slow and costly to dye accurately in small quantities, so instead a casting of the material is made and sprayed to give a realistic finish.

BRIGHT SPACE, BRIGHT IDEAS

With more than seven boxes made for each colourway – one for the studio's own office, one for Bowers & Wilkins' offices, another for its factory in China, and the rest for the suppliers of other elements – two huge desks located in the studio can quickly be overtaken by samples and prototypes.

“You never focus down immediately on the final variant in terms of colour,” says Goodwin. “You’ve got two or three that get launched, and you’ve got some extra ones that come out [down the line]. But in order to develop the extra ones and the original ones, you’ve got a broad range. So we have loads of these different colours that never see the light of day.”

Which is to say that more products are already in development. In fact, during our visit, the latest over-ear Bowers & Wilkins Px7 S3 headphones are released.

Goodwin Hartshorn’s ergonomic study of ears allowed for rapid iteration of earbuds using its in-house Formlabs 3D printer

Naturally, that prompts the co-founders to show me a pair of prototypes made while the new headphones were being designed. They highlight, for example, the slimmed ear cup, which has the effect of reducing the space that exists between the arms of the headband and the wearer's temple.

This tweak to the 'temple gap' makes the headphones more comfortable and the experience of wearing them less noticeable for the user. The choices of colour, finish and details, meanwhile, speak volumes about the studio's luxury, high-end ambitions for the product.

Once again, we see here a blend of excellent function and detailed aesthetics that is comfortably a calling card of Goodwin Hartshorn. www.goodwin-hartshorn.co.uk www.bowerswilkins.com

Entering Goodwin Hartshorn's bright office space in Deptford, South London, visitors quickly encounter a striking line-up of tall Bowers & Wilkins floorstanding speakers, curated furniture, and walls covered in shelving filled with examples of the company's client work and sources of inspiration.

A long, modular system of USM Haller storage drawers in golden yellow runs along one wall, neatly housing box upon box of physical prototypes. Due to the range of projects and products that Goodwin Hartshorn works on, hands-on physical prototyping is a key part of the process, influencing the designers' troubleshooting and ideation.

Aside from Bowers & Wilkins, the company has developed a wide range of products for other companies, from precision surgical staplers for major medical brands to fun kitchen utensils for Joseph Joseph.

Among the archived boxes filling the drawers is a set of prototypes for a cutlery project. Both founders have a clear love for the form and function of knives, forks and spoons.

Cutlery, according to Goodwin, “is really fascinating in the same way as, I guess, chairs, lighting and fonts. The function of the object is just so assumed that what you get is the pure design. You almost don't clock that this is a method for passing on information. I think cutlery is similar.”

Much of the design for this cutlery set was hand-sculpted, in order to attain the fine details of each angle, the depth of the spoon's 'bowl', and to embody the angles of a 'superellipse' throughout the set. The prototypes were built using metal 3D printing before being post-processed, and they line up beautifully on the desk.

Says Hartshorn: “The shape of a knife is fundamentally different, for good reasons, from that of a spoon. They're doing two different things, as is a fork, which is going to stab stuff. You're trying to develop an identity for them that is harmonious, yet which respects the different functions.”

The original Joseph Joseph Garlic Rocker was designed by Goodwin Hartshorn and has inspired hordes of imitators

More prototypes are produced, this time showcasing Goodwin Hartshorn's work for Joseph Joseph – an early client, thanks to a recommendation from designer Mark Sanders, their former tutor at the Royal College of Art. Sanders is himself a designer of iconic Joseph Joseph products, such as its Chop2Pot folding chopping board.

An early brief was for a device to crush garlic. “That was the brief: it just needs to stand out,” laughs Goodwin, as a box of prototypes is unpacked.

The initial concept was a tube in which cloves of garlic were placed, with a plunger to push the garlic up against a grate, crushing it. However, the force needed to push the garlic made this a two-handed job, which led to the designers adding arms to the design, along with a rocking action to help the task along. In this way, the Garlic Rocker was born. From this functionality, it then became about further honing the design and surfaces, adding curves and recurves along with different thicknesses to make the rocker more sculptural, as well as easier to hold and use.

Once the garlic is pressed, the designers realised, it sits on top of the grate and can be tipped straight into a pan, while the tool's stainless-steel construction means it can also be used as a garlic 'soap', removing the odour from your hands.

These forms were developed with hand-sculpted foam models and then built in 3D CAD using Solidworks. Having overcome some initial manufacturing challenges, the Goodwin Hartshorn Garlic Rocker for Joseph Joseph proved a great success and has become an icon among kitchen utensils. Yet its success has attracted masses of copies, some of which occupy a dedicated shelf in the Goodwin Hartshorn office.

“It was surreal for us!” laughs Hartshorn as he works his way down the line of impersonators. Together, they form a timeline.

Some early copies are just basic versions of the Garlic Rocker, built from cheaper materials. Later copies incorporate pseudo functionality such as bottle openers or slicers. More recently, larger brands have launched their own variants, approaching the product in the same way as they might an age-old, established utensil.

What’s key is that none of these copies can rival the original, either in terms of form or function. However, it’s a striking reflection of the market forces that successful designers everywhere must combat.

Copper Cooks Up Energy-Efficient Range Alternative

With sustainable options needed now more than ever, the appliance industry is ripe for disruption. The transition from gas stoves is accelerating, not only due to health concerns but also as part of a larger effort to reduce reliance on fossil fuels. One company is tackling these challenges head-on. Meet Copper, the world's first battery-equipped induction range that is heating up the ultimate blend of sustainability, high-end design, and accessibility to consumers.

Designing for energy efficiency

Besides Copper’s eye-catching design, one of its biggest appeals is its consumer-friendly option for energy efficiency. “Many people don’t realize how expensive and difficult it is to retrofit their home for a traditional electric stove,” says Mitch Heinrich, principal designer, Copper. “With Copper, you can just plug it in, and you don’t have to rewire your house.”

Unlike traditional electric ranges that rely on an intermittent significant power draw, Copper is energy storage equipped, storing power in an internal battery and using it strategically. Copper charges its battery when electricity is cheapest and cleanest, typically during midday when solar or wind power is abundant. Then, during peak hours, the stove runs entirely off its stored energy without any sacrifice to performance. This intelligent system is controlled onboard the range once users connect their stove to Wi-Fi and let it automatically manage energy use. The Copper range can cook 4-6 meals on battery power if the power goes out and has been tested to cook a traditional Thanksgiving meal for dozens of people while plugged in.

“There are grid utility data resources that show where the energy is coming from at any given time,” Heinrich says. “By analysing these trends, we ensure that Copper is charging only when it makes sense, reducing both costs and carbon footprint.”
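Copper has not published its scheduling logic, but the behaviour Heinrich describes – charge the battery when grid electricity is cheapest and cleanest, then run from stored energy at peak times – can be illustrated with a minimal, hypothetical sketch. The forecast fields, weighting, thresholds and function names below are illustrative assumptions, not Copper's implementation.

```python
from dataclasses import dataclass

@dataclass
class HourForecast:
    hour: int      # hour of day, 0-23
    price: float   # grid price forecast, $/kWh (hypothetical figures)
    carbon: float  # grid carbon intensity forecast, gCO2/kWh (hypothetical figures)

def plan_charging(forecast: list[HourForecast], charge_hours: int = 4) -> dict[int, str]:
    """Toy scheduler: charge during the cheapest/cleanest hours, discharge at peak.

    This is NOT Copper's algorithm, only an illustration of the idea described
    in the article: rank hours by a blended price/carbon score, charge the
    battery during the best-scoring hours and run from it during the worst.
    """
    max_price = max(h.price for h in forecast)
    max_carbon = max(h.carbon for h in forecast)

    # Equal weighting of cost and carbon is an arbitrary illustrative choice.
    def score(h: HourForecast) -> float:
        return h.price / max_price + h.carbon / max_carbon

    ranked = sorted(forecast, key=score)
    charge = {h.hour for h in ranked[:charge_hours]}       # cheapest + cleanest hours
    discharge = {h.hour for h in ranked[-charge_hours:]}   # peak hours: cook from battery

    return {
        h.hour: "charge" if h.hour in charge
        else "discharge" if h.hour in discharge
        else "grid"
        for h in forecast
    }

# Hypothetical day: midday solar makes 11:00-14:00 cheap and clean,
# while the evening peak (17:00-20:00) is expensive and carbon-heavy.
if __name__ == "__main__":
    day = [
        HourForecast(
            hour=h,
            price=0.10 if 11 <= h <= 14 else 0.35 if 17 <= h <= 20 else 0.20,
            carbon=150 if 11 <= h <= 14 else 450 if 17 <= h <= 20 else 300,
        )
        for h in range(24)
    ]
    print(plan_charging(day))
```

In the real product, Copper says this happens automatically on the range itself once it is connected to Wi-Fi, drawing on the kind of grid utility data Heinrich mentions.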

Creating Copper’s signature look

For the design of Copper, Heinrich enlisted the help of Michael DiTullo, who is renowned for his more than 20 years of industrial design work for companies as large as Nike, Intel, Honda, and Google and for small, innovative start-ups. They took a collaborative approach for the design of Copper, focusing on crucial details like the brow plate, knobs, and oven handle brackets to create a signature look.

“We felt like the brow plate [the front leading edge of the stove] was an area with an opportunity to provide a little bit more character,” Heinrich says. “As a single component, it sets the tone for the whole product. If you look at most ranges, it's a stainless steel, rectangular box. That was the first meaty industrial design project that Michael and I collaborated on using Fusion.”

For these design components, the workflow process between Heinrich and DiTullo was dynamic and highly iterative—all with the use of Autodesk Fusion.

“Normally, there’s a rigid process with statements of work, but with Mitch, it was just like, ‘Hey, we need to look at this knob again,’ and I’d send 20 new ideas in a couple of days,” DiTullo says. “Then, Mitch added four more in the same file in Fusion to collaborate. Tools such as Fusion are enabling designers to work in a more organic way.”

They also took a unique approach to materials, opting for knobs made from black walnut wood reclaimed from a barn in Sonoma, California.

Reclaimed wood reduces the need for new lumber and deforestation, minimizes waste, and lowers the carbon footprint. Heinrich personally machined the first 150 wooden knobs using Fusion’s CAM tools, after struggling to find manufacturers willing to take on the complexity of his design.

“Most CNC shops wouldn’t take on the project,” Heinrich says. “But with Fusion, I was able to machine them myself before working with a trusted partner who now manufactures them.”

Using Fusion end-to-end

Throughout the design and engineering process, Fusion played a critical role in allowing the team to work quickly and efficiently.

“For me as a designer, a core part of who I am and how I work is based on physicality and making things real in the world,” Heinrich says. “A key element of Fusion is the parametric CAD coupled with CAM functionality. I can work very fluidly from concept to physical prototype, make a tweak, and then go back to physical prototype. That back-and-forth allows me to end up with the best solution. The friction is so low to make those tweaks and to really find that optimal endpoint.”

“Fusion has a bit of everything—solid modelling, surface modelling, CAD/CAM tools,” DiTullo adds. “It's not about being locked into one rigid software. It's about flexibility, which is exactly what we needed to make Copper a reality.”

Moving forward

As Copper makes its way into homes across the country and with a new version underway, it's redefining what an electric range—and appliances—can be.

“The future isn’t about some impersonal, hightech kitchen full of gadgets like The Jetsons,” Heinrich says. “It’s about making the home better in ways that are meaningful, practical, and sustainable. And that’s exactly what Copper is all about.”

“Fusion is like a Swiss Army knife. You can have this interchange between numerous stakeholders on any given project. It's this translator and central hub taking files to import or export from any program. Fusion is super valuable.”

THE BEST HEAD-MOUNTED DISPLAYS for PRODUCT DEVELOPMENT

» Welcome to DEVELOP3D’s round-up of the best head-mounted displays that deliver total immersion and incredible fidelity for exploring and showcasing 3D models

Sony and Siemens first unveiled their joint creation, the SRH-S1, at the Consumer Electronics Show (CES) in January 2024

In our last headset round-up, head-mounted displays (HMDs) were just beginning to emerge that were clearly streets ahead of their predecessors – faster, simpler to use, and more comfortable to wear.

Since then, HMDs have made further advances. The headsets featured on this list are all lightweight, built for many hours of comfortable wear, and offer unprecedented levels of colour and clarity.

It’s also notable that almost every HMD listed here offers video passthrough. Combined with increased refresh rates, this allows 3D models to be viewed at true scale and in situ, as opposed to within the confines of a VR box. Pleasingly, these benefits also help eradicate any symptoms of motion sickness during headset use.

Weight has been reduced dramatically. Balance has been tuned. Straps have been redesigned and colour quality has skyrocketed. If you haven’t done so already, it’s time to consider an enterprise use case that will see HMDs integrated with your product design and engineering workflows.

A word of warning: do watch out for accessory costs! Battery packs, cables and other accompanying kit can really set you back when purchased as add-ons.

1

SONY X SIEMENS XR HMD (SRH-S1)

The result of Sony’s collaboration with Siemens, the SRH-S1 is an ‘immersive spatial content creation system’, which was unveiled at CES in January 2024 and became available for purchase over a year later. Two handheld controllers – a stylus and a ring – make clicking, pointing and zooming in on an object feel intuitive. Plastic components reduce any extra weight on the user’s head, and the well-balanced front and back sections prevent the sensation of tipping forwards often associated with front-heavy weighting.

The XR HMD is compatible with Siemens NX X cloud-based software. This enables users to work with the software in a browser window, side-by-side with a live 3D render of their design, enabling editing and examining of that design in real time. Different buttons on the controllers enable different tools.

» Tethered

» Passthrough: Yes

» Price: From £3,669

www.sw.siemens.com

2

APPLE VISION PRO

The Apple Vision Pro is intended for entertainment, and typically used to play video games and watch films, television shows and sports. But this headset also boasts a customisable workplace with ‘infinite space’, which connects to a Mac using the Mac Virtual Display tool. Apple keyboards, trackpads and other Bluetooth accessories can also be connected –although sadly not the Apple Pencil –and apps can be used with colleagues in real time using FaceTime. Screens can be placed anywhere in the Vision Pro’s display, with passthrough allowing customisation of an office or home space.

Apple sells various accessories designed to maximise productivity and portability. The battery, travel case and light seal will set you back £199 each. For those of us who wear glasses, Zeiss optical inserts are available, starting at £99.

» Untethered

» Passthrough: Yes

» Price: From £3,899

3

META QUEST 3

The most affordable option on our list, the Meta Quest 3 offers significant improvements over the previous Quest 2. A more powerful processor and higher resolution improve the user experience. Pancake lenses make the entire device slimmer, lighter and more comfortable to wear. Since it offers ten times the number of pixels in passthrough when compared to its predecessor, blending the real world and virtual world feels far more natural.

The Meta Quest for Business subscription service enables corporate users to control and maintain the multiple headsets that might be needed by a team. Users can design their own avatars and use these when meeting in a virtual office – useful for teams spread across multiple locations. Improved Touch Plus controllers make fine-tuning 3D designs feel more natural than before.

» Untethered

» Passthrough: Yes

» Price: From £469 www.meta.com

4

VIVE FOCUS VISION BUSINESS EDITION

Designed specifically for business use, the Vive Focus Vision offers what Vive claims is its “most immersive spatial computing experience yet.”

A built-in depth sensor makes passthrough feel natural, and the Vive Streaming Kit, sold separately, makes VR graphics visually lossless. The headset can be monitored and updated remotely using the Vive Business+ device management solution, making it easier to manage remote teams.

Alongside design and 3D visualisation, which can be used by multiple people at the same time in a remote collaboration context, the Vive Focus Vision can also be used for training and simulation, including location-based experiences and manufacturing.

» Tethered or untethered

» Passthrough: Yes

» Price: £1,296 (Headset at £999 plus two-year business warranty and services pack) www.vive.com

5

MAGIC LEAP 2

Magic Leap 2 optimises passthrough with transparent eyepieces and is designed to be ergonomic, offering a comfortable fit even after an entire day of wear. The computing and battery units sit in a separate tethered pack, reducing the weight of the headset itself. A dual-CPU configuration means that one processor in the headset manages lighter tasks, while another, more robust processor in the separate pack handles more demanding work.

Magic Leap 2 is built on an open platform, meaning that apps can be ported from other devices and adapted for use. Its 3.5 hours of battery life is longer than that offered by other headsets on this list, and includes running AR apps without distortion or lag.

» Untethered

» Passthrough: Yes

» Price: From £3,337 www.magicleap.com

6

VARJO XR-4 SERIES

The XR-4 series from Varjo boasts an expanded field of view, spanning 120 x 105 degrees, as well as passthrough cameras that mimic the human eye with an XR gaze-driven autofocus camera system. The headset is compatible with workstations powered by Nvidia GPUs and running software that includes Nvidia Omniverse, Unreal Engine, Autodesk VRED and hundreds of others. Unlike some headsets, which require the purchasing of prescription lenses, the XR-4 series can be worn with glasses. Noise-cancelling microphones improve group collaboration sessions.

There are three variants in the series: the XR-4; the XR-4 Focal Edition, which features enhanced passthrough with autofocus cameras; and the XR-4 Secure Edition, which is designed for secure training environments and is TAA-compliant.

» Tethered

» Passthrough: Yes

» Price: From €5,990 excl. tax www.varjo.com

7

MEGANEX SUPERLIGHT 8K

The MeganeX weighs just 179 grams and boasts a ‘near-zero facial pressure’ design, reducing the risk of facial marking, discomfort or redness after long periods of wear.

The headset does not offer passthrough, but its physical flip-up mechanism allows the user to pause work and return to their real-world surroundings without needing to remove the headset.

Headphones and other devices can be connected via a USB-C port. A dongle for connecting controllers is included, as is a useful handheld pole adapter, which allows for quick VR experiences without needing to wear the headset, similar to the experience of looking through a magnifying glass.

» Tethered

» Passthrough: No

» Price: Available for pre-order from £1,599

8

LENOVO THINKREALITY VRX

Engineered ‘exclusively for enterprise’, the ThinkReality VRX offers high-resolution passthrough for mixed reality simulation. Compared to previous generations, this new headset is slimmer for comfort, while pancake lenses reduce weight. The battery is placed at the rear of the headset to act as a counterweight and make the design more ergonomic. A venting system channels heat generated by the display away from the user’s face to maximise comfort.

Intended for soft skills training, the VRX is accompanied by a suite of end-to-end services with included support. It can be used as a standalone device or connected to a workstation for wireless or tethered streaming.

» Tethered or untethered

» Passthrough: Yes

» Price: From £949

NEW PERSPECTIVES

» More than one year on from its release, the Onshape Vision app from PTC is opening up new modes of visualisation and collaboration for designers wearing the Apple Vision Pro headset. Stephen Holmes sat down with Greg Brown, VP of product management at Onshape, to talk about where extended reality might prove most useful for designers and how best to take advantage of its powers

Q: How do you view the uptick in designers and engineers using VR and XR, and what’s Onshape’s approach to addressing this trend?

A: Repeat usage is now much easier to encourage. In the old days, VR was cool once, but you didn’t really feel like you ever needed to do it again. Typically, a CAD workflow involves a lot of back and forth between text input, as well as very precise mousing and other things, and our customer base hasn’t really resonated with sitting with their arms up in the air, doing ‘conductor’ actions.

So we’ve focused more on the collaboration, sharing and mark-up that keeps design reviews moving, trying to shorten that time, especially with distributed design teams. These are common within our customer base and a few have specifically gone out and bought head-mounted displays [HMDs] so that they can bring teams together a little bit more.

Q: What are some of the benefits for designers that you feel the Apple Vision Pro technology offers?

A: We’re using the super capabilities of Apple’s gaze and environment detection and all these other things. That’s the special sauce for the Apple Vision Pro!

The newest thing is what we call ‘true support for spatial computing’. Instead of running the Onshape Vision app alone, you can actually shrink that down into a small transparent cube space that you can put anywhere alongside you – next to you on your desk, for example –and can continue working in your other Apple Vision Pro apps while it stays up-to-date in real time.

● 1 Onshape Vision runs exclusively on the Apple Vision Pro ● 2 XR enables large, complex products to be brought into the design space ● 3 A moving prototype at true scale allows for more natural problem-solving and error-spotting ● 4 Passthrough means that real-world objects can be assessed alongside virtual ones

It’s pretty incredible to use, because you can then bring up Safari and have Onshape in a browser window. You can also have Arena in another browser window, and you could be doing CAD and PLM integration, all while you’ve got a full 3D interactable model right next to you on the desk.

Q: The Onshape Vision App solely supports the Apple Vision Pro. What was the thinking behind this decision?

A: I’ve used AR/VR/XR devices all the way back to the last century, and the Apple Vision Pro [AVP] marks an incredible, order-of-magnitude change from other devices: the field of view, the clarity, the way that you can work with gestures rather than pistol-grip devices in your hands.

The other thing is that it uses the environment around you to light a model. Nice reflections, realistic lighting effects and real-time, correct shadows – that’s an incredible thing that your brain will pick up on very, very quickly. And it’s comfortable for long-term use.

Onshape has been working with Apple from the very beginning. The app was launched on the opening day of the Apple Vision Pro and has undergone iterative enhancements since then.

We had obviously been working with Apple beforehand for some period of time. And it was very good to have that very close relationship with the company, which continues today.


Q: Where in the product development workflow does it fit best?

A: You can evaluate different designs very easily. Your team can be on an Onshape session, changing configurations, changing dimensions, and you see the instant update in real time. With a model that’s got multiple configurations, like a robot with different end effectors, you can evaluate those differences in real time.

I’ve spent a lot of time in industrial design, and evaluating the flow of light over surfaces is a really interesting use case. It’s that good! The quality of the environment mapping, reflections and materials you get is so good that an industrial designer can get a real sense of how the light is going to flow.

I could be looking at some 2D representation, like a table of numbers, which is feeding in to create the 3D data, and I can manipulate the 2D stuff and see the 3D model change at the same time in the CAD browser window and in the 3D visualisation model.

Q: Does being cloud-native give Onshape advantages when it comes to XR?

A: The special, almost magical parts of the Onshape Vision app are only possible because of the 100% cloud-native nature of the Onshape platform. You’re looking at the latest data, it’s dynamic and it’s real-time.

The Onshape platform is unique in that it’s this true single source of data. You don’t need to save out in some glTF format or other kind of exported format and re-upload to another software to visualise it. It’s a very natural way to continue what you were doing in Onshape in a browser.

You’re also able to invite other people to share your session. We support Apple’s SharePlay. You basically place a FaceTime call [while using the AVP] to somebody who’s not using a device, and they can see what you’re seeing. They could be on the other side of the world – and then suddenly, you’re collaborating.

If you’ve got a second user who’s got an AVP as well, then it’s even better, because you can both be manipulating the design together. It’s really quite compelling to be able to pick one part out of the assembly and pass it to your collaborator. It works very smoothly, because the AVP treatment of gestures is so accurate. It tracks your hands incredibly well.

Q: You mention using Onshape Vision for marking up documents. How does that work in a virtual environment?

A: You can be looking and pointing at something, and then simply add a comment. Then that comment will be saved in that document the same way as if you had put that comment in using a browser or mobile device, and that’s in real time. I use voice-to-text all the time to make that comment. It’s a bit easier than typing.

Q: That’s a great tip – have you any others for the AVP?

A: Currently, you can bring up a Safari browser inside the AVP or you can mirror your screen. With the latter, I can be using my MacBook, but the display is virtual in front of me. This way you get a really high resolution – an 8K screen – much better than looking at Safari.

Also, this year, we included the ability to visualise your Onshape models in synthetic environments [VR], as well as your real environment [XR]. So we have six different environments that vary from a showroom to a factory. There’s also a fun one – the cargo bay of a Star Destroyer – which gives really nice lighting effects. It has strip lights, a space vista outside, and there’s a flat, industrial-looking floor. So, if you’re evaluating products that have lots of curves, it’s great for that.

We don’t allow people to upload their own synthetic environments, but that’s something that we could probably imagine doing in the future.

www.onshape.com

GETTING COMFORTABLE WITH DATA TRANSLATION

» With a worldwide customer base and 230 manufacturing and assembly plants in 32 countries, automotive and aerospace parts supplier Adient is reliant on accurate data translation to ensure its seating is produced precisely

Designing seating for the automotive and aerospace sectors requires close attention to ergonomics. A deep understanding of how human bodies behave at rest and in motion and the way they interact with machinery is necessary to create a seat that offers optimal comfort, safety and wellbeing.

As with the other key components in a vehicle, a complex mix of factors – such as material stresses and durability, strength, noise and vibration, and weight – are all important considerations for seating engineers. Get the balance right, and the seat becomes a key touchpoint likely to help sell that car or leave passengers feeling positive about their flight.

Adient, a global automotive and aerospace seating and interiors specialist, produces whole seating systems, from fabrics to frames and from trims to tracks, and supplies almost every major automotive company in the world.

Automotive OEMs often choose to divide components among multiple seating suppliers in order to maintain a diverse supply base. In order to lower costs and promote part interchangeability, Adient practices a high degree of standardisation and modularity, a strategy that also works well for the car companies, spreading parts and systems throughout their own vehicle platforms.

At the same time, Adient must manage a market and manufacturing workflow full of variables that need to be streamlined. Data relating to the design, test, production and review stages must all work well together, since any out-of-sync data could negatively impact product quality, as well as manufacturing speed and cost.

LANGUAGE CHALLENGES

That’s a tall order, given the wide range of companies that Adient supplies. “All the OEMs we serve use different CAD systems—or varying versions of those systems,” says Ramanamurthy H Pentakota, global director of IP, technical services and operations at Adient. “They also have their own types of certifications, specifications, and how they want to fit this or that annotation to which data layer.”

With this in mind, a clear process is needed to support collaboration between 5,600 employees, spread across 12 core development centres and connected to more than 230 manufacturing and assembly plants across 32 countries.

“Generally, there are no uniform standards across all the different supply chains, because OEMs follow their own standards, which suppliers must then adapt to fit,” says Pentakota.

At Adient, he adds, fidelity to the company’s own Integrated Model Creation Process offers a clear path forwards.

“We try to standardise on a single CAD platform in-house, specialise in two more, and then faithfully recognise the original data from another 25 to 30 different CAD and analysis environments that send information to us,” he says.

In order to support better interoperability for its design and PLM workflows, Adient engaged with Elysium and uses its software platform, which enables it to translate and package all data for different OEMs and suppliers.

“The validated modelling data that Elysium facilitates is a key part of our integrated process approach to Model-Based Definition (MBD) and a Model-Based Enterprise (MBE),” says Pentakota. “Some time in the near future, a robust, all-digital, MBD/E environment will help us better integrate the mobility devices, sensors and mechanical systems required for autonomous vehicle development. That’s our direction.”

DATA PREPARATION

Naturally, the best seat designs and components find common use across automotive firms. In fact, one company might even specify a system used by a competitor.

There may also be several suppliers of seating fabrics and foams that need to integrate their products with Adient’s metal structures, typically back frames and track assemblies. The complexity traditionally surfaces in the exchange of data, where issues of geometric dimensions and tolerances (GD&T) and assembly fit come into play.

For example, one company might produce source data and require deliverables in NX, while the Asian assembly plants for each OEM may work in three different versions of Catia. Producing an API-based Master Model via Elysium in both NX and Catia allows the original geometry and manufacturing instructions to be preserved as intended, and the Asian versions of Catia to be validated against their NX and Catia masters.

This standardises resources in hardware, software, skilled designers and best practices in a single system, while still achieving deliverables in multiple other systems and flavours.

To date, Pentakota estimates that Elysium tools have saved Adient at least 10% in data preparation and iteration costs, and 10% of costs related to automated validation, adding, “This is very significant!”

● 1 Well-designed seating can help reduce vehicle weight and improve fuel efficiency

● 2 Seating frames are a major factor in reducing noise and vibration and improving comfort

● 3 Comfortable, stylish seating positively impacts customer propensity to buy a specific model

TRANSLATING THE FUTURE

Shorter engineering development cycles are moving the automotive industry away from 2D toward 3D MBD/E processes that serve end-to-end automation from design to manufacturing and marketing.

“Robust translation and validation of source information – and insight into that data – is what will keep Industry 4.0 and true automation on track,” says Pentakota. Embedding rich data – such as all the combinations of processes that create a drill hole, along with materials, suppliers, costs and other profiles – will, he explains, take GD&T to the next level of product intelligence.

“Manufacturing process intelligence is baked into the CAD model, so that machine software picks that up and actually does the programming, optimised and without error.”

With smart seats for autonomous vehicles in the planning, Adient is already thinking about feedback loops that report on posture and ergonomics and support the use of e-commerce in improving the passenger experience. Translating this data for designers is where innovation will accelerate and where comfort for future passengers sits.

www.elysium-global.com

BLACK BUTTERFLY

» Lexus is looking to push the boundaries of human-machine interaction with its new cockpit control concept for seamless driver interaction

Drawing crowds of admirers at Milan Design Week, Lexus recently unveiled its reimagining of driver controls – the new Black Butterfly user interface.

With more than 2,000 exhibitors filling all available spaces in the design hub of the Italian city with clothes, perfumes, furniture and endless other luxury products, the Japanese marque opted for a more subtle reveal at Superstudio Piu. The dark, minimalist studio space ensured that attention was focused on one thing only: the gently glowing car, positioned in the middle of the room.

The Lexus LF-ZC concept car was chosen to host the Black Butterfly dual-interface cockpit control system. The goal is to bring human and machine closer together through synchronised interactions.

The car itself was lit from within, drawing attention to its interior design as well as to the simple interface which replaces the many controls, buttons and displays seen in other vehicles. The screen glowed white, waiting for a driver to slide into their seat.

Koichi Suga, general manager of Lexus Design Division, described how the idea for completely redesigning the driving experience through the car’s interface came about. In the early design stages, he explained, he printed out the shape and size of the monitor onto pieces of paper and stuck them on the dashboard to help visualise the final product.

● 1 The Lexus LF-ZC concept car was chosen to host the Black Butterfly dual-interface cockpit control system
● 2 The micro-LED flat-panel display boasts better contrast and energy efficiency than earlier technologies

To create the screen seen in the LF-ZC car, Suga chose micro-LED, a flat-panel display technology that boasts improved contrast and energy efficiency compared to traditional LCD technology.

Today, he explained, technology is advancing in a way that often supersedes human ability - but this shouldn’t be scary. Instead, we can create technology that forms a connection with the user, synchronising with their body.

The Black Butterfly is an example of how this connection could be used in the automotive industry.

EASE OF USE

When asked if he thinks that the LF-ZC will be easier to drive than a traditional car, Suga replied that if you were used to using a Nokia phone, iPhones with their keyboard-less interfaces might initially seem confusing, but once you’re accustomed to them, they’re intuitive and simple to use.

Lexus surrounded its launch with related installations in neighbouring rooms. These included A-Un, inspired by the Japanese concept of harmonised breath, in collaboration with Japanese creative studios Six and Studeo; and Discover Together, a legacy project with elements designed by Japanese company Bascule, Northeastern University and in-house designers from Lexus.

The Black Butterfly aims to bring human and machine closer together through synchronised interactions, an idea reflected in A-Un’s butterfly-shaped screen, handwoven from 35 kilometres of Japanese bamboo fibre, which came to life as visitors approached it. Like the car’s user interface, the installation also responds to the heartbeat of the person standing in front of it, creating unique patterns.

Both the LF-ZC and the Black Butterfly remain concepts for now, so I asked Suga when Lexus customers can expect cars with the Black Butterfly technology to be made road legal and available for purchase. Lexus had initially set a target of the end of 2026, but Suga admitted that this had been too optimistic, and he was no longer sure when mass-producing such a complex operating system would be possible.

One thing seems clear, though: the Black Butterfly looks set to revolutionise the way that human beings and technology interact in future, blurring the boundaries between the two in ways that are beneficial to both. www.lexus.com

FASTER THAN A BULLET

» Design consultancy Tangerine drew on traditional Japanese colours, shapes and ideas about hospitality and applied them to the design of next-generation bullet trains, as Emilie Eisenberg reports

It’s difficult to find a train anywhere in the world that compares to the Shinkansen. A journey taken on a Japanese bullet train is a clean, quiet experience, providing unmatched views of the surrounding countryside.

But take the fact that some of its trains have been running now for almost thirty years and combine it with Japanese expectations of high standards, and it’s hardly surprising that a refresh is due.

Enter Tangerine, a British-born consultancy with offices in London and Seoul and the first non-Japanese design partner to be involved in the Shinkansen’s design. Following its work on interiors and exteriors for clients including British Airways, Southwest Airlines and Japan Airlines, Tangerine was commissioned to design the new Shinkansen interior and exterior, in partnership with the East Japan Railway Company, JR-East. Other Japanese clients include Nikon, Seiko, ceramic sanitary ware brand Toto and office furniture manufacturer Okamura.

STORIED PAST, HIGH-SPEED FUTURE

The Shinkansen relies on a network of high-speed railway lines, on which trains reach speeds of 200 miles per hour and connect all of Japan’s major cities. The first line linking Tokyo and Osaka opened in October 1964, followed by eight more lines. It’s one of the world’s busiest high-speed railway networks.

The new E10 Shinkansen will replace E2 trains that have been in use since 1997, and E5 trains, in use since 2011. The E10 will run on the Tohoku route, the longest train journey in Japan, covering 674.9 kilometres between Tokyo and Shin-Aomori. Although no set date has been confirmed, the new trains are expected to enter service in 2030 and run throughout the following decade.

Tangerine’s overall goal in designing the E10 trains was to create a manufacturable design that balances quality, budget and, crucially, timelines. The interior design is inspired by traditional Japanese aesthetics and landscapes, with a consistent design visible across all passenger classes. It features a green and blue colour scheme with graduated effects in the wall panelling and seating upholstery. Soft lighting and cool colours are intended to reflect Japan’s nature, while also creating a calming atmosphere inside the carriages. The layout of carriages varies across classes, but all layouts are intended to make seating as comfortable and spacious as possible, while ensuring operational efficiency.

● 1 From 2030, the E10 Shinkansen will replace existing E2 and E5 trains on the Tohoku route
● 2 Interior designs for the E10 draw on Japanese landscapes, seascapes and cityscapes for inspiration

The train exteriors, meanwhile, are primarily green, borrowing colours from the mountainous Tohoku region. The two primary shades of green are a bright green called ‘Tsugaru green’, and a darker green called ‘evening elm’.

Curved line designs connect the carriages, inspired by Sakura cherry blossoms in an homage to previous Shinkansen designs.

“The E10 Shinkansen represents a milestone in UK/Japan collaboration within the rail sector, setting new benchmarks for hospitality-focused design and sustainable travel,” says chief creative officer Matt Round. “With its blend of Japanese spirit, innovation and user-centric design, the E10 Shinkansen is poised to redefine high-speed rail travel for decades to come.”

EXPLORING DESIGN OPTIONS

Early in the design process, Tangerine prioritised sketching, CAD modelling, mood boards and quick rendering to explore different design options. As the project progressed, Tangerine refined 3D renders of the seating, carriage and surfaces using SolidWorks and Rhino, focusing on CMF (colour, material and finish). This work was led by Tangerine’s head of CMF, Monica Sogn. A separate team of graphics experts created the livery design.

The CMF team created mood boards to establish colours and themes, and applied different options to 3D models, in order to better understand how CMF interacted with the train carriage environment. Tangerine collaborated with multiple supply chain partners to specify exact colours, materials and finishes.

Blender was used for VR rendering and visualisation of the three seating classes, which were viewed using headsets in the studio. Using a clear space like a reception area or meeting room, the designers at Tangerine studied the carriages through headsets to get a sense of the layout of each class. Both VR and animated flythroughs were used for showing the designs to JR-East, including CEO Yoichi Kise. These showcased specific customisations and applications for design approvals and sign-offs.

The designs were optimised and edited multiple times using feedback from both stakeholders and manufacturing partners, who also viewed Tangerine’s renders and visualisations.

Looking ahead, Tangerine is eager to expand its footprint in Japan. The collaboration with JR-East has been a positive one, encouraging the studio to seek other work within the Japanese rail sector in particular, as well as other industries in general. www.tangerine.net

‘‘
With its blend of Japanese spirit, innovation and user-centric design, the E10 Shinkansen is poised to redefine high-speed rail travel for decades to come ’’
Matt Round, chief creative officer at Tangerine

Full Colour 3D Print

It’s All Over

Your search for a 3D printer capable of producing complex & creative models in up to 10,000,000 colours is over. The new Mimaki 3DUJ-2207 delivers extraordinary detail in full colour.

Explore a world of colour for just

£34,995

RUNWAY-READY

» New York Embroidery Studio is keeping up with couture fashion’s cutting-edge requests with the help of Stratasys’s 3D Fashion technology, and hopes to show off the results at this year’s Met Gala, as Emilie Eisenberg reports

New York Fashion Week is a time for creativity and exploration. During this year’s celebrations, New York Embroidery Studio (NYES) hosted an open-house event in the Garment District where it showcased its collaboration with 3D printing specialist Stratasys.

Founded by Michelle Feinberg in 2002, NYES specialises in embroidery that embellishes the creations of many renowned designers. Its work is regularly seen at runway shows and at the New York Metropolitan Museum of Art’s annual Met Gala. Small production runs are handled by its Manhattan studio. Larger runs take place at partner factories in China and India. The studio’s in-house capabilities include laser cutting, hand sewing, embroidery – and more recently, 3D printing.

At NYES’s Fashion Week event, the focus was on direct-to-textile 3D printing. Here, the company uses the Stratasys J850 TechStyle printer, powered by Stratasys’s 3D Fashion technology. This prints in a wide range of colours and offers multi-material capabilities, printing directly onto fabric, garments, footwear and other luxury accessories.

Merging traditional embroidery techniques with 3D printing allows NYES to streamline workflows and reduce material waste while creating its personalised, intricate designs. Designs can be printed precisely, with consistent quality and able to withstand wear and tear.

Product development is sped up by rapid prototyping, and performance is enhanced by the use of materials including carbon fibre composites for durable yet lightweight designs.

“The J850 TechStyle is an extraordinary addition to our capabilities, allowing us to elevate creativity while delivering on our commitment to innovation and sustainability,” says Michelle Feinberg, owner and creative director of NYES. “Our clients are thrilled by the possibilities this technology opens up, from high-end fashion to VIP and entertainment projects.”

NYES’s designers have used the J850 printer to create elaborate, embroidered illustrations that include fruits, insects, flowers and intricate lace designs.

ELEVATING CREATIVITY

The Met Gala is considered the most prestigious fashion event in the world, attended by big-name stars from the worlds of the arts, fashion, sports and politics. The theme of this year’s Met Gala, due to take place in May, is ‘Superfine: Tailoring Black Style’, which ties in with an exhibition of the same name, exploring the importance of sartorial style to the formation of Black identities around the world.

As ever, NYES hopes to be involved in creating the dresses and outfits of Gala invitees, many of which will be entirely custom-made and fitted. This year, it will be able to bring 3D printing into the mix, in order to create even more eye-catching designs.

“Our latest 3D-printed swatches bring intricate textures and elevated details that can take any tailored look to the next level,” says Feinberg. “Whether it’s sculptural embellishments, modern lacework or dimensional patterns, these elements add a unique, innovative touch to classic craftsmanship.”

But the Met Gala is just one of many events at which NYES hopes to showcase its collaboration with Stratasys and demonstrate how 3D printing could change the rarified world of haute couture. On social media, Feinberg is already active in promoting the potential of this partnership to NYES fans and clients. Its work with Stratasys has only just begun. www.nyembroiderystudio.com

(Above) Stratasys’s 3D Fashion technology is designed for printing directly onto fabric (Below) NYES provides colourful details that embellish the work of big-name designers

First Look: Jester3D

» To meet the huge demand for marketing content, digital creative agency Mediaman has developed its own visualisation tools that harness the benefits of AI. Stephen Holmes discovers a workflow that offers users improved control and better results

By now, many readers will have had the chance to try out some of the AI-enabled visualisation tools offered online – and you don’t need to be an experienced visualisation specialist to quickly spot their flaws. Weirdly merged bodies, gobbledy-gook text and unrealistic lighting are just some of the immediate issues, not to mention the questions they raise about security and intellectual property protection.

At the same time, many would agree that the speed at which these products can build something useful from relatively little is impressive. It’s an ability that most creatives would like to harness.

That’s certainly true of the creatives at Chinese digital marketing agency Mediaman, which has recently added its in-house developed Jester3D software to its list of achievements. While the software is not yet openly available, it’s one of the first to target some of the most acute pain points of the AI visualisation process and give users some control, while still tapping into the benefits of speed and ease of use.

Steeped in digital, interactive content production for brands including Mercedes-Benz, Porsche, Bosch, Under Armour and the South Korean cosmetics giant Amore Pacific, Mediaman knows how to transform workflows, both for its own internal teams and for its clients.

Much of this is about controlling scenes and keeping rendered products as close to the real thing as possible. The company already develops databases of high-resolution 3D digital twins for its clients using a mix of CAD data, 3D scanning and human artistry, and these assets form the key data for Jester3D.

Controlled, colour-accurate, and offering perfect life-like detail, these assets can be taken by a user and embedded in scenes and scenarios quickly and with no need for an extensive knowledge of visualisation software.

But in an age of throwaway content, Mediaman knows that the lifetime of a marketing image can be brutally short. Brands can’t be seen to be reusing the same image across social media, multiple times a day – but tight budgets and schedules can make it a real challenge to outsource content creation.

In response, Mediaman has developed Jester3D, which draws on tools that already exist on the market, but brings them together in a way that makes them more accessible to a broader audience.

SPEEDY RESULTS

Web-based Jester3D utilises Three.js on WebGL technology, allowing it to be run on any modern laptop with speedy results. The Jester3D back end uses Stable Diffusion to power its AI elements, while the front-end UI is skinned for each of Mediaman’s unique client dashboards.

This manages users and their permissions, gives an overview of all their brands, products and marketing campaigns, and is designed for ease of use by those not versed in 3D CAD and rendering software.

One key element is that Jester3D allows for the creation of projects. The agency’s own experience of working with clients is embedded deep in the product, enabling users not only to create content, but also edit or adjust it based on feedback.

To create a new project, you simply name it and choose one of the brands from the client, which then gives you access to its product assets, all professionally produced digital twins. Using the 3D-to-image workflow, you can add multiple products to your scene, where they form key building blocks.

You’re then ready to add in other premade assets from the Mediaman library; for example, flowers and foliage for use in a perfume advertisement. There are some pre-sets available, which add to the intuitive nature of the toolset, helping someone completely new to all this get to grips with the process quickly.

● 1 Jester3D enables users to set up visualisations, using AI to guide the application of background, textures, colours and lighting ● 2 ● 3 A product’s digital twin is produced by Mediaman, offering a high level of detail and fidelity ● 4 ● 5 The ability to add shapes to the set-up means users can guide the AI to position buildings and other surroundings

The clever bit is the ability to add in shapes – the greyscale spheres, prisms and cuboids used to guide the AI in the prompt stages. These can act as plinths for a perfume bottle, for example, or the positioning of buildings surrounding a car.

By using these 3D elements, you immediately gain a lot more control of a scene. Equally, the lighting and perspective can be altered as you’re not simply working with a static 2D PNG file, but 3D entities.
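To make this more concrete, below is a minimal sketch of how such a scene might be assembled in Three.js, the library the article says Jester3D is built on. This is not Mediaman’s code: the asset path, object sizes, camera position and light values are illustrative assumptions only.

```typescript
// Hypothetical sketch of a Jester3D-style scene: a product digital twin plus a
// greyscale guide cube acting as a plinth. Not Mediaman's actual implementation.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();

const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0.6, 0.8, 1.4);   // perspective chosen by the user
camera.lookAt(0, 0.3, 0);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A directional light stands in for the 'dappled light' direction the AI will later match
const sun = new THREE.DirectionalLight(0xffffff, 2.0);
sun.position.set(2, 3, 1);
scene.add(sun, new THREE.AmbientLight(0xffffff, 0.3));

// Greyscale cube used as a plinth to guide the AI in the prompt stages
const plinth = new THREE.Mesh(
  new THREE.BoxGeometry(0.4, 0.3, 0.4),
  new THREE.MeshStandardMaterial({ color: 0x808080 })
);
plinth.position.y = 0.15;
scene.add(plinth);

// Load the client's digital twin (the glTF path is a placeholder)
new GLTFLoader().load('/assets/perfume-bottle.glb', (gltf) => {
  gltf.scene.position.y = 0.3;        // sit the bottle on top of the plinth
  scene.add(gltf.scene);
  renderer.render(scene, camera);
});
```

Because the products and guide shapes exist as 3D entities rather than a flat image, their types, positions, camera perspective and lighting direction can all be captured and handed on to the back-end render server, as described next.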

The next step is for Jester3D to take all this data – the type and position of objects, perspective and lighting requirements –and bring it all together on the back-end render server in a minute or so, creating a high-resolution, ray traced, quality render to use as the base for the AI generation.

From there, it comes down to the AI inputs. They are: Positive Prompt, Negative Prompt, Pick an AI Framework, Prompt Strength, Style Templates. (The Mediaman team, like others we’ve spoken to working in this field, recommend using an AI such as ChatGPT or similar to write prompts.)
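As a rough illustration of how those inputs might be bundled for the render server, the hypothetical TypeScript shape below pairs the five AI inputs listed above with the ray-traced base render and the square 2K output described later in this article. All field names and values are assumptions for illustration; Jester3D’s actual API is not public.

```typescript
// Illustrative request shape only – field names are assumptions, not Jester3D's real API.
interface RenderRequest {
  project: string;                  // project created against a client brand
  baseRender: string;               // ray-traced base image from the render server
  positivePrompt: string;           // what the AI should build around the digital twins
  negativePrompt: string;           // artefacts to avoid (warped text, extra objects, etc.)
  aiFramework: 'stable-diffusion';  // the back end named in the article
  promptStrength: number;           // how far the AI may deviate from the base render
  styleTemplate?: string;           // optional preset look
  output: { width: number; height: number };
}

const request: RenderRequest = {
  project: 'spring-fragrance-campaign',
  baseRender: 'renders/fragrance-scene-base.png',
  positivePrompt: 'perfume bottle on a natural stone plinth, dappled sunlight, soft foliage',
  negativePrompt: 'warped text, distorted labels, extra objects',
  aiFramework: 'stable-diffusion',
  promptStrength: 0.6,
  styleTemplate: 'editorial-minimal',
  output: { width: 2048, height: 2048 },  // square format suits social media and AI training data
};
```

Framed this way, the scene geometry constrains where the AI can paint, while the prompt strength controls how far the generated surroundings may drift from the ray-traced base.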

In our example, a fragrance scene, the prompt for natural stone material takes its position and rough volume from the cube we placed in the scene. The perspective is matched and the dappled light takes its direction from the input, with realistic reflections and shadows.

The digital twins remain photorealistic in the finished scenes, maintaining fine details such as text and labels. Surrounding features respond well to the scene-setting procedures used.

Some AI outcomes still lack the finesse required of commercially usable content. Some need post-production work, while others may fall wide of the mark and require further prompt changes.

However, what Jester3D is good for is churning out lower res web content – for social media posts, images for e-commerce sites and similar – and feeding back ideas and concepts to the professional designers at Mediaman.

This is evident in the export settings at present, capable of 2K output at 2048 x 2048 pixels. The sizing allows greater output speed, and while the versatile square format proves ideal for social media, a key reason is that most AI tools are trained on square images, so this format usually gives the best results.

MEETING DEMAND

Jester3D is already being used by Mediaman clients, with one cosmetics company using it to produce several hundred images on a weekly basis for its social media and e-commerce platforms in China, with an ever-improving success rate.

While the immediate focus for Jester3D is clearly Mediaman’s own clients, this toolset has great potential as a licensable software for in-house marketing teams in the future as companies look to boost digital creation skills.

With few marketing teams having the skills to develop 3D visualisation work from scratch, let alone the budget to spend on multiple seats of 3D design software, a co-creation mode, for designers to work alongside marketing teams and play around with different camera perspectives, prompt moods and styles, is something Mediaman is developing.

In a crowded market, offering greater client involvement and even co-creation definitely helps with buy-in on ideas.

There’s also a potential use case here for product designers, helping them to increase their render output, by moving beyond their existing toolset for complex and time-consuming hi-res workflows, to more agile AI-enabled output tools for faster, simpler jobs. That could free up visualisation specialists, or simply reduce the cost associated with visualisation tool spend.

The team at Mediaman says that an enhanced version is in the works, capable of 4k output with greater reflections and refractions and able to be used in print or on billboards. And that could spell even more value for end users.

In its current form, Jester3D allows clients to communicate with agency teams with more ease and greater clarity. In best case scenarios, they can even create some content themselves. It’s an interesting way to package a toolset and provides a glimpse of a future in which companies develop software themselves using AI to increase productivity and build stronger relationships with customers.

HP ZBook Ultra G1a / AMD Ryzen AI Max Pro

» It’s not often a mobile workstation comes along that truly rewrites the rulebook, but this 14-incher does just that, with an integrated graphics processor that has fast access to more memory than even the most powerful discrete laptop GPU, writes Greg Corke

Intel has dominated the mobile workstation sector for decades. Its processors have powered everything from ultra-compact 14-inch models to high-performance 17-inch behemoths. Today, Intel Core chips are almost always paired with discrete Nvidia laptop GPUs. AMD hardly gets a look in.

But with the new 14-inch HP ZBook Ultra G1a, and its powerful AMD Ryzen AI Max Pro ‘Strix Halo’ processor, that balance could be starting to shift.

The AMD Ryzen AI Max Pro comes with an integrated RDNA 3.5-based AMD Radeon GPU with professional graphics driver. But unlike integrated GPUs of the past, this one flexes some serious muscle. Together with 16 high-performance ‘Zen 5’ CPU cores, this means the HP ZBook Ultra G1a can deliver unprecedented levels of performance in a highly portable 14-inch form factor.

But there’s more. Unlike a discrete GPU with a fixed pool of memory, the integrated AMD Radeon GPU can tap directly into up to 96 GB of high-speed system memory—far exceeding what any other laptop GPU provides. As long as the laptop is properly configured you shouldn’t have to worry about datasets exceeding the GPU memory limit, which can happen with discrete laptop GPUs with 8 GB or even 16 GB of memory. When that limit is breached, workflows can grind to a halt—or worse, cause the software to crash.

THE 14-INCH MOBILE POWERHOUSE

‘14-inch’ and ‘powerhouse’ are two words not often used together, but the HP ZBook Ultra G1a delivers the kind of performance you’d usually expect from a much larger 15-inch or 17-inch laptop. For architects, engineers and product designers, it’s the first 14-inch mobile workstation that can truly be used for GPU-accelerated 3D visualisation. Most other 14-inch models are largely limited to 2D / 3D CAD and BIM workflows.

The AMD Radeon 8060S GPU in the HP ZBook Ultra G1a—integrated into the top-tier AMD Ryzen AI Max+ Pro 395 processor—is significantly more powerful than those typically used in other 14-inch mobile workstations. Its performance is comparable to the Nvidia RTX 3000 Ada (8 GB), a discrete GPU usually reserved for larger laptops with higher power demands.

Most 14-inch laptops are limited to Nvidia RTX 500 Ada (4 GB), Nvidia RTX 1000 Ada (6 GB), or Nvidia RTX 2000 Ada (8 GB) GPUs. These entry-level GPUs are not only less powerful, on paper, but have access to far less memory - limitations that can impact demanding workflows (more on this later).

The HP ZBook Ultra G1a also marks a massive leap forward from HP’s previous AMD-based mobile workstation, the HP ZBook Firefly G11A (read our review: www.tinyurl.com/D3D-Firefly), which comes with 2024’s AMD Ryzen Pro 8000 Series processor and integrated AMD Radeon 780M GPU. In most GPU-intensive tasks, the HP ZBook Ultra G1a is around two to three times faster.

While all eyes are on the GPU, the HP ZBook Ultra G1a also comes with a very powerful ‘Zen 5’ CPU with 16 cores and 32 threads. In multi-threaded workflows, it delivers a level of performance previously only available in 15-inch and 17-inch models. However, in single threaded or lightly threaded tasks such as CAD and BIM, Intel still appears to have the lead.

AMD Ryzen AI Max Pro has twice the number of cores as the older AMD Ryzen Pro 8000 Series processor, which comes with eight ‘Zen 4’ cores. This gives it a massive uplift in highly multi-threaded workflows. The AMD Ryzen AI Max Pro platform also supports much faster memory – 8,000 MT/s LPDDR5X compared to DDR5-5600 on the AMD Ryzen Pro 8000 Series – so memory-intensive workflows like simulation and reality modelling should get an additional boost.

Like most new laptop chips, the AMD Ryzen AI Max Pro also comes with a Neural Processing Unit (NPU), capable of dishing out 50 TOPS of AI performance, meeting Microsoft’s requirements for a Copilot+ PC.

While 50 TOPS NPUs are not uncommon, it’s the amount of memory that the NPU and GPU can address that makes the AMD Ryzen AI Max Pro particularly interesting for AI.

‘‘
In the HP ZBook Ultra G1a, it feels like we’re witnessing a genuine shift in the landscape for 14-inch mobile workstations ’’

In theory, having access to large amounts of memory should allow the processor to handle large AI workloads, such as multi-billion parameter large language models (LLMs), which would not fit into the fixed memory of a discrete GPU.

On a more practical level for architects and designers, the chip’s ability to handle large amounts of memory could offer an interesting proposition for text-to-image generators like Stable Diffusion, which are increasingly used for ideation in early-stage design. It should be able to output higher resolution images without the massive slow-down that typically happens when GPU memory becomes full (read our GPUs for Stable Diffusion article: www.tinyurl.com/D3D-stable).

A MASSIVE POOL OF MEMORY

Discrete GPUs, such as Nvidia RTX, have a fixed amount of on-board memory. In a 14-inch mobile workstation this is 4 GB, 6 GB or 8 GB. In contrast, the AMD Radeon GPU, built into the AMD Ryzen AI Max Pro processor, has direct and fast access to a massive, unified pool of system memory. It can use up to 75% of the system’s total RAM, allowing for up to 96 GB of GPU memory when the HP ZBook Ultra G1a is configured with its maximum 128 GB. This means the mobile workstation can handle certain workloads that simply aren’t possible with other laptop GPUs. When a discrete GPU runs out of memory it must ‘borrow’ some from system memory, but as data must be transferred over the PCIe bus, this is highly inefficient.

Performance can drop dramatically depending on how much memory the GPU needs to borrow. Render times increase and frame rates can fall from double digits to low single digits—making it nearly impossible to navigate models or scenes. In some cases, the software may crash entirely.

The HP ZBook Ultra G1a allows users to control how much memory is allocated to the GPU. In the BIOS, simply choose a profile - from 512 MB, 4 GB, 8 GB, all the way up to 96 GB (should you have 128 GB of RAM to play with). Of course, the larger the profile, the more it eats into your system memory, so it’s important to strike a balance.

The amazing thing about AMD’s technology is that should the GPU run out of its ringfenced memory, then it can seamlessly borrow more from system memory, if available, temporarily expanding its capacity. Since this memory resides in the same physical location, access remains very fast. Even with the smallest 512 MB profile, when borrowing 10 GB, we found 3D performance only dropped by a few frames per second, maintaining that all-important smooth experience within the viewport.

This means that if system memory is in short supply, opting for a smaller GPU memory profile can offer more flexibility by freeing up RAM for other tasks.

PREMIUM PORTABILITY

As a laptop, the HP ZBook Ultra G1a is a very impressive piece of kit – solid, exceptionally well-built, lightweight and slim. Weighing just 1.50 kg and measuring 18.5 mm in profile, it’s the thinnest ZBook ever. Considering the sheer processing power packed inside, that’s nothing short of remarkable.

The slender chassis is made possible by the power efficient AMD processor, which is rated at 50W. Under heavy loads it draws 70W at peak, regardless of whether you’re hammering CPU or GPU. For extreme multi-tasking, power is shared between both processors, resulting in lower clock speeds.

The HP ZBook Ultra G1a comes with a 140W power supply, chunkier than most other 14-inch mobile workstations. But it’s still USB-C, so you get flexibility. For cooling there’s an improved HP Vaporforce thermal system that incorporates a vapour chamber with large dual turbo fans, along with expanded rear ventilation. This keeps the system running cool and relatively quiet. Fan noise was perfectly acceptable and consistent, even when rendering for hours.

With CPU renderers – KeyShot, V-Ray, Corona Render and Cinebench – all-core frequencies ranged from 3.5 GHz to 3.7 GHz. In GPU renderers – Twinmotion, D5 Render, KeyShot and Lumion – GPU clock speeds fell between 2,600 MHz and 2,850 MHz. Running off battery, the system’s power consumption drops to 40W, leading to reduced all-core CPU frequencies of around 2.5 GHz and GPU frequencies of 1,700 MHz. At these levels, fan noise was virtually silent.

The HP XL-Long Life 4-cell, 74.5 Wh polymer battery delivered 95 minutes of runtime under full GPU load in Twinmotion, and 125 minutes in the Solidworks SPECapc benchmark, which combines demanding graphics with intensive single- and multi-threaded CPU tasks. For typical day-to-day use – where modelling happens in short bursts and many editing tasks are relatively lightweight – you can expect significantly longer battery life. The good news is that the laptop charges quickly, reaching 50% in just 29 minutes and 80% in 55 minutes.

There are two display options: a standard FHD (1,920 × 1,080) panel, and one of the standout features of our test machine – a stunning 2,880 × 1,800 OLED touchscreen with a 120Hz refresh rate, 400 nits of brightness, and 100% DCI-P3 colour gamut. There’s only a single NVMe TLC SSD (512 GB – 4 TB), which is hardly surprising given the size of the machine.

The HP ZBook Ultra G1a is well equipped with ports – two USB-C (40 Gbps) Thunderbolt 4 with DisplayPort 2.1, one USB-C (10 Gbps) with DisplayPort 2.1 – all supporting power delivery – and one USB Type-A (10 Gbps) with charging support. While there’s no RJ-45 Ethernet port, wired connectivity is still possible via a USB adapter, and you can also get fast data transfer from built-in Wi-Fi 7.

In use, the keyboard feels solid and responsive, while the large trackpad is smooth and natural. That said, serious 3D modelling is always best handled with an external mouse.

Finally, for Teams and Zoom calls, there’s an impressive 5 MP IR camera with Poly Camera Pro software. Advanced features like AutoFrame, Spotlight, Background Blur and virtual backgrounds are powered by the 50 TOPS NPU, optimising power efficiency and helping maximise battery life. The camera shutter’s black-and-white stripes are a nice touch, making it immediately obvious when it’s closed.

The elephant in the room is the price. If you’re used to budget-friendly 14-inch mobile workstations — or assumed an integrated GPU would save you money — this might come as a shock. Our fully loaded review unit — featuring an AMD Ryzen AI Max+ Pro 395, 128 GB of RAM, 2 TB SSD, and a 14” 2.8K touch display — comes in at a hefty $4,049.

There are savings to be had, though. Dropping to a 1 TB SSD knocks off a significant $490 (bringing it down to $3,559), and opting for 64 GB of RAM instead of 128 GB saves another $460 (down to $3,099).

It certainly pays to shop around — prices vary wildly. We’ve even seen our exact review configuration listed on hp.com for a jaw-dropping $9,060.

ON TEST

We put the HP ZBook Ultra G1a to work in a variety of real-world CAD, visualisation, simulation and reality modelling applications. Our test machine was fully loaded with the top-end AMD Ryzen AI Max+ Pro 395 processor and 128 GB of system memory, of which 64 GB was allocated to the AMD Radeon 8060S GPU. All testing was done at the OLED display’s maximum 2,880 x 1,800 resolution.

We compared the laptop to the previous generation HP ZBook Firefly G11A, as well as a range of desktop workstation CPUs and desktop workstation GPUs with 8 GB of memory. While we didn’t have access to recent Intel-based mobile workstations for testing, the comparisons still offer valuable context—particularly in terms of GPU memory limitations.

COMPUTER AIDED DESIGN (CAD)

In 3D CAD software Solidworks 2025, the HP ZBook Ultra G1a easily handled everything we threw at it. While the previous generation HP ZBook Firefly G11A with AMD Ryzen Pro 8000 Series processor stuttered with very large models, the HP ZBook Ultra G1a took everything in its stride. It even made light work of the colossal Maunakea Spectroscopic Explorer telescope assembly with 8,149 parts and 58.9 million triangles, delivering silky smooth model navigation. As Solidworks is largely single threaded or lightly threaded, there wasn’t a huge difference between the ZBook Ultra G1a and ZBook Firefly G11A in terms of computational performance.

VISUALISATION

Real-time GPU rendering / visualisation is where you start to see the true benefits of the HP ZBook Ultra G1a. In workloads that require a lot of GPU memory, having access to a large pool gives it a significant performance edge over systems with memory-constrained discrete GPUs.

In many viz tools, memory usage rises significantly with output resolution of final renders. In Solidworks Visualize, for example, a small suspension model uses 1.2 GB to load but then a considerable 7.4 GB to render at 4K and 12.8 GB to render at 8K. The much larger snow bike model with 32 million polygons uses 12.4 GB to render at 4K and 19.1 GB to render at 8K.

Of course, if you load the equivalent CAD model in Solidworks at the same time, as most designers would, that adds an additional 5.0 GB to the overall GPU memory footprint. In multi-application workflows like these, when there’s not enough GPU memory to go around, each app must release memory before the other app can claim what it needs.

This transition can cause a stutter as memory is freed and allocated. And if you’re trying to render in the background while continuing to model in Solidworks, insufficient memory will likely result in much longer render times.

We found Twinmotion by Epic Games demands even more from GPU memory.

The Snowdon Tower scene, for example, which is fairly typical of a mainstream arch viz dataset, uses 8.1 GB of GPU memory simply to load, 9.8 GB to render at FHD resolution, 14.2 GB to render at 4K and a whopping 35.1 GB to render at 8K!

Of course, with 64 GB of GPU memory to work with, the HP ZBook Ultra G1a handled the task with ease. But what happens when a discrete GPU runs out of memory?

To render six 4K images, the 8 GB desktop GPUs were around 6 GB short, resulting in significantly longer render times.

The HP ZBook Ultra G1a completed the job in just 183 seconds, while the AMD Radeon Pro W7500 (8 GB) took 659 seconds, the Radeon Pro W7600 (8 GB) 688 seconds, and the Nvidia RTX A1000 (8 GB) 799 seconds.

The important thing to note here is that while these desktop GPUs may only appear slightly less powerful on paper, they shouldn’t perform dramatically slower. However, once GPU memory is maxed out and the discrete GPU starts swapping to system RAM, performance drops sharply and render times increase significantly.

We saw even more dramatic behaviour in Lumion 2024, where our test scene uses 12 GB of GPU memory when rendering at 4K. With both 8 GB Radeon Pro desktop GPUs, the software simply couldn’t cope with running out of memory and subsequently crashed.

Of course, for lighter workloads, which require less than 8 GB, render times are much more comparable. In the D5 Render benchmark, for example, which uses well under 8 GB of GPU memory, the HP ZBook Ultra G1a completed the task in 266 seconds. By comparison, the AMD Radeon Pro W7500 (8 GB) took 386 seconds, the Radeon Pro W7600 (8 GB) 261 seconds, and the Nvidia RTX A1000 (8 GB) 294 seconds.

With 64 GB of GPU memory at its disposal, the HP ZBook Ultra G1a never approached its limit—even with our most demanding models. However, as datasets grow larger, although it can continue rendering, navigating and working within the 3D environment can become increasingly challenging.

In D5 Render, for example, the HP ZBook Ultra G1a took 795 seconds to render a colossal lakeside model with 2,177,739,558 faces at 4K resolution, using 21 GB of GPU memory. However, viewport performance dropped to just 4.3 frames per second, making it difficult to navigate the scene – a serious productivity killer. For very large models like this, a more powerful GPU such as the Nvidia RTX 5000 Ada (16 GB) would probably be a better choice, and you’d also be able to take advantage of Nvidia DLSS to boost real-time performance using AI. However, that would require stepping up to a 16-inch or 18-inch mobile workstation.

It’s also important to note that AMD’s RDNA 3.5-based GPUs generally trail Nvidia when it comes to hardware ray tracing performance. For example, in Lumion 2024, the HP ZBook Ultra G1a is only 1.44 times slower than a desktop Nvidia RTX 2000 Ada GPU in raster rendering, but that gap widens to 2.61 times slower during ray-trace rendering. More critically, enabling path tracing in Twinmotion on the HP ZBook Ultra G1a caused the software to crash.

THE GPU SOFTWARE SHIFT

Over the past decade, certain professional 3D software tools have effectively become off-limits to AMD GPUs. While AMD GPUs work perfectly well with industry-standard graphics APIs such as DirectX 12, which forms the backbone of Unreal Engine, Lumion, Twinmotion, Revit, D5 Render and others, software built on Nvidia CUDA has not been able to run on AMD Radeon (Pro) GPUs.

The good news is this is starting to change.

‘‘ ‘14-inch’ and ‘powerhouse’ are two words not often used together, but the HP ZBook Ultra G1a delivers the kind of performance you’d usually expect from a much larger 15-inch or 17-inch laptop ’’

● 1 Rendering this Twinmotion Snowdon Tower scene requires 14.2 GB of GPU memory at 4K resolution and a whopping 35.1 GB at 8K resolution

● 2 Running out of GPU memory can have serious consequences. In Lumion, our 12 GB test scene renders without issue on the HP ZBook Ultra G1a, thanks to the AMD Radeon 8060S having access to 64 GB of shared memory. However, on a desktop workstation with an 8 GB AMD Radeon Pro GPU, the same scene causes the software to crash

● 3 GPU memory usage is not limited to a single application. Here Solidworks CAD uses 5.0 GB while Solidworks Visualize uses 12.4 GB to render at 4K

● 4 There are indirect limits to model size. This colossal D5 Render scene needs more than 20 GB of GPU memory but still renders fine on the HP ZBook Ultra G1a. However, because of its complexity, viewport performance drops to just 4.3 frames per second, making it difficult to navigate the scene - a real productivity killer

As Independent Software Vendors (ISVs) cotton on to the huge potential of AMD’s new integrated graphics technology, there appears to be real appetite to support it. AMD has collaborated with several ISVs to port Nvidia CUDA code over to AMD’s HIP framework. For visualisation, that includes Luxion KeyShot, Maxon Cinema 4D and Redshift, and Rhino with Cycles; for simulation, there’s Altair Inspire and Ansys Discovery.
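
To give a flavour of what such a port involves, here is a minimal, illustrative HIP sketch; it is not code from any of the applications named above. HIP deliberately mirrors the CUDA runtime API, so in many cases the change amounts to swapping cuda* calls for their hip* equivalents, and AMD’s HIPIFY tools automate much of the translation.

// Minimal HIP sketch (illustrative only). The kernel body is unchanged from
// its CUDA form; only the host-side API calls are renamed. Error checks are
// omitted for brevity.
#include <hip/hip_runtime.h>
#include <vector>
#include <cstdio>

__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);

    float* dev = nullptr;
    hipMalloc((void**)&dev, n * sizeof(float));                  // was cudaMalloc
    hipMemcpy(dev, host.data(), n * sizeof(float),
              hipMemcpyHostToDevice);                            // was cudaMemcpy

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);               // launch syntax is identical
    hipDeviceSynchronize();                                      // was cudaDeviceSynchronize

    hipMemcpy(host.data(), dev, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(dev);

    printf("first element after scaling: %f\n", host[0]);
    return 0;
}

Built with AMD’s hipcc compiler, the same kernel code runs on Radeon hardware, which is what makes these ports tractable for ISVs.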

Of course, there’s still a lot of work to do. Some AEC-focused ISVs depend on Nvidia GPUs to accelerate specific features. In reality modelling software Leica Cyclone 3DR, for example, AI classification is built around Nvidia CUDA (read our workstations for reality modelling article - www.tinyurl.com/D3D-reality).

One of the most interesting developments is from Luxion, which currently supports AMD GPUs in a beta version of KeyShot 2025.2.


Luxion was a long-time advocate of CPU rendering, but in 2020 it finally embraced the GPU, adding render support for Nvidia RTX through OptiX. Today, many of its customers use GPUs to benefit from faster results. However, CPU rendering remains essential for more complex projects. When a discrete GPU runs out of memory and must swap out to system RAM, rendering performance can slow significantly and warning messages appear. If memory demands then increase further, the system can eventually crash. With the HP ZBook Ultra G1a, KeyShot customers have a real choice: render on CPU or GPU, it doesn’t matter, as both have access to the same memory.

We tested the hardware in the KeyShot 2025.2 beta, using an absolutely colossal multi-room supermarket model, courtesy of Kesseböhmer Ladenbau, that uses 18.1 GB of GPU memory simply to load. The scene features 447 million triangles (nearly 900 times more than KeyShot’s sample headphone scene) and includes 2,228 physical lights and 237,382 parts with incredible detail. There are chiller cabinets, cereal boxes, and 3D fruit and vegetables! In fairness, this model is probably too big to use on a laptop like this, day in, day out. While FHD renders took under 10 minutes, we found it very hard to move around the scene.

But the key point here is that it would previously have been unthinkable for a scene this complex to be GPU-rendered on a 14-inch laptop.

DON’T FORGET THE CPU

It’s easy to forget that the HP ZBook Ultra G1a also has an incredibly powerful CPU. In our CPU rendering benchmarks, its scores in V-Ray 6.0 (33,285), Cinebench 2024 (1,596), KeyShot 11.3 (4.15) and Corona Render 10 (9,923,886) are almost exactly twice those of the HP ZBook Firefly G11A - although this is hardly surprising, given it has twice the number of cores.

What might surprise you is that compared to the desktop AMD Ryzen 9 9950X processor, which boasts the same number of ‘Zen 5’ cores but draws 230 watts at peak versus 70 watts, there’s not that big a difference.

The desktop AMD Ryzen 9 9950X is only 45% faster in V-Ray, 39% faster in Cinebench, 44% faster in KeyShot and 50% faster in Corona Render 10.
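
Factor in power and the comparison looks even better for the mobile chip: the desktop Ryzen 9 9950X draws roughly 3.3 times as much at peak (230 W versus 70 W) for, at best, a 50% uplift in rendering throughput.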

THE COMPETITION

Mobile workstation manufacturers— including Dell, Lenovo, and HP—are currently in a transitional phase. New models featuring Intel and Nvidia processors have been partially announced, and by July we can expect a full rollout of workstation-class laptops powered by Intel Core Ultra (Series 2) CPUs and Nvidia Blackwell laptop GPUs.

It’s important to note that Nvidia is increasing memory across its professional GPU lineup. However, this upgrade is modest on the models most likely to appear in 14-inch laptops. The RTX Pro 500 Blackwell (6 GB), RTX Pro 1000 Blackwell (8 GB), and RTX Pro 2000 Blackwell (8 GB) all remain at 8 GB or below.

In contrast, the higher-end models – RTX Pro 3000 Blackwell (12 GB), RTX Pro 4000 Blackwell (16 GB) and RTX Pro 5000 Blackwell (24 GB) – offer significantly more memory than their Ada Generation predecessors. This capacity lift could make them much better equipped to handle more demanding datasets. And let’s not forget that the RTX Pro 4000 Blackwell and RTX Pro 5000 Blackwell will likely offer significantly more performance than the integrated AMD Radeon 8060S GPU in the HP ZBook Ultra G1a.

CONCLUSION

In the HP ZBook Ultra G1a, it feels like we’re witnessing a genuine shift in the landscape for 14-inch mobile workstations. This isn’t just about cramming more power into a device that slips easily into a backpack — it’s about redefining what’s possible with integrated GPUs. And, with one eye on the future, perhaps even workstation GPUs in general.

HP is currently the only major workstation OEM to adopt the processor, but Dell and Lenovo will certainly be paying close attention.

Naturally, there’s a practical limit to the size of model the laptop can handle. An architect or designer might tolerate longer render times with a GPU that’s not in the high-end league. But if that trade-off means only being able to navigate a model at a few frames per second, it can quickly lead to frustration. In some cases, the full interactivity of a more powerful GPU will outweigh the benefits of ultimate portability. It’s all about finding the right balance.

That said, not all pro applications need a fully interactive 3D viewport, and the HP ZBook Ultra G1a could be a good fit for GPU-accelerated simulation and reality modelling - not forgetting CPU-accelerated workflows too. Of course, there are still some software compatibility hurdles to overcome, particularly with CUDA-only tools. But from the software developers we have spoken with, there appears to be genuine excitement about the new technology. For the first time in years, it feels like the playing field is beginning to level. To sustain this momentum, AMD will need to continue to invest in software development.

Perhaps the most exciting thing is where this technology could be heading — both in the short term and further down the line.

HP has already announced the HP Z2 Mini G1a, a micro desktop workstation powered by the same AMD Ryzen AI Max Pro processor. But with a bigger 300W power supply, it’s likely to deliver significantly better performance.

Could we also see AMD Ryzen AI Max Pro make its way into larger mobile workstations, which typically come with 230W power supplies? It certainly seems possible.

Looking further ahead, we’re eager to see what the rumoured Zen 6-based ‘Medusa Halo’ processor — expected to launch next year — might bring to the table. According to ‘Moore’s Law is Dead’, a respected tech-focused YouTube channel, it could offer a 30% to 50% GPU performance boost over the current AMD Ryzen AI Max Pro chip.

If these predictions hold true, the AMD chip could deliver a substantial edge when handling larger models, potentially redefining the capabilities of both mobile and desktop workstations.

www.hp.com | www.amd.com

● 5 With access to 64 GB of GPU memory, the HP ZBook Ultra G1a can render scenes that would typically need to be handled by a CPU renderer. In the KeyShot 2025.2 beta, we managed to GPU-render this colossal multi-room supermarket model from Kesseböhmer Ladenbau, which features 447 million triangles and uses 18.1 GB of GPU memory simply to load

Thanks to AI, many of us may soon be abandoning tools and skills it took us years to acquire. But don’t be glum, writes Stephen Holmes. There’ll still be a need for existing knowledge and specialisms well into the future

It was not a question I was expecting to receive, surrounded by all the cutting-edge technologies on display at this year’s DEVELOP3D LIVE: “Can you count in Cumbrian?”

The challenge to speak in the bygone language of a far-flung corner of this island startled me. The Cumbrian method of counting dates back to the sixth century, long after the Romans had got sick of the British weather and shuffled off, leaving the keys to Hadrian’s Wall in the door. It’s a rhythmic, tried-and-tested system used by shepherds and farmers to count flocks: you count up to twenty, move a pebble to a different pocket to mark that tally, and then start again. Five pebbles, one hundred sheep. You get the idea.

One of our attendees – and you know who you are – seized on the opportunity to test my claim to Cumbrian heritage. At first, I struggled to remember the sequence I learned as a child. Like most people in this country, I don’t use it to purchase a flat white or give directions to a taxi driver.

But a moment later, I remembered. Cumbrian counting was still there in my subconscious, through muscle memory, maybe, or hard-baked into my DNA.

LOST KNOWLEDGE

We live in an age in which to stop advancing could lead to obsolescence, like great white sharks drowning if they stop swimming. Globalised knowledge can be instantly searched, but recently, I’ve found myself thinking about all the designers and engineers who developed skills that have since been forgotten or worked on products that are now redundant.

How many of you worked on a device like a BlackBerry or a CD player? They’re both products created sufficiently recently to have been developed in 3D CAD, but they’ve already been consigned to the museum of curios we all tend to keep in a bottom drawer somewhere.

The mammoth task undertaken by Goodwin Hartshorn and described in this issue’s cover feature, to design the most comfortable earbud available on the market today, got me thinking. It’s a fascinating tale and a testament to the lengths some designers will go to if truly committed to their craft.

Take a step along any high street or travel anywhere on public transport, and earbuds are ubiquitous. So what is their future evolution likely to entail? It’s not a great leap of the imagination to envisage some new-fangled technology replacing earbuds as we know them. And where will that leave all the research, ideation and craft invested in today’s products?

That knowledge will, of course, continue to be of use further down the line, albeit in a less direct sense. Yet a lot of ideas and development, once no longer deemed necessary, go the way of the BlackBerry.

A FRESH LOOK AT PROGRESS

We see a similar situation when we look at the designer’s use of technology and the skill sets necessary to master it.

In this issue of DEVELOP3D, we touch on several visualisation projects. Once the domain of specialist CGI wizards, complete with cauldrons full of textures, tips and shortcuts, now it seems that every designer is comfortable with cranking out a rendering. With their straightforward means of importing a CAD model, friendly user interfaces and excellent results, many tools today are as easy as ‘drag and drop’.

That’s not to dismiss visualisation specialists and their combined knowledge – far from it. Visualisation experts are out there making a healthy living by being at the top of their game and producing the highest fidelity output. A top-level skillset is still, quite rightly, in demand.

This in turn brought me to consider AI and its increasing role in our industry. AI is making a lot of products far easier to use, perhaps at the cost of replacing years of hard-earned know-how.

Take comfort in the fact that all the knowledge you’ve accrued won’t be entirely wasted. Some of it is irreplaceable, some parts are transferable

However, there’s only so much that can be done with a text prompt. It’s tough to achieve the control needed to build something so realistic that the human eye believes the product is real, while also getting the lighting and context to match perfectly.

In short, AI is set to blow the doors off gatekeeping – in this case, the 10,000 hours of training and practice deemed necessary to become an expert at visualisation. With such tools, someone in marketing doesn’t need to learn a new piece of software inside out simply to have a nice picture to upload to the company Instagram account.

Just like other software enhancements, the majority of us will drop some tools and skills from a workflow and replace them with AI-enhanced software that does the legwork for us. This will give us more time to do other things, and a small pool of specialists will remain on standby, ready to tackle the toughest challenges.

So, take comfort that all the knowledge you’ve accrued won’t be entirely wasted. Some of it is irreplaceable, other parts are transferable. A lot of it will simply fade into the darkness at the back of your mind until one day you need to recall it, in response to an unexpected question.

Or, as we say when we count in Cumbrian: Yan, tyan, tethera, methera, pimp.

GET IN TOUCH: Watching cartoons with his son, Stephen is often reminded of other activities to which he’s lost countless hours – TV, movies, books, sports – although he still maintains it will all come good one day in a pub quiz. On Twitter, he’s @swearstoomuch
