Bringing a Localized Experience Across the Globe

The globalization of retail isn’t slowing; rather, it is shifting to online distribution channels. eShopWorld is a key conduit for bringing brands and retailers into new markets seamlessly, specializing in localizing the customer experience. In this episode, hear from founder and CEO Tommy Kelly as he discusses creating agile frameworks for cross-market expansion, the growing impact of BOPIS (buy online, pick up in store) and contactless retail, and the importance of local market knowledge and ecosystem coordination.

About the Guest

Tommy Kelly is founder and CEO of eShopWorld, the global e-commerce technology and services partner chosen by the world’s best-loved apparel, beauty, footwear, and luxury brands to power their international expansion.

Prior to eShopWorld, Tommy founded Two-Way Forwarding and Logistics, which he sold to Aramex in 2006, having grown it to a $100m+ business. Subsequently, Tommy became CEO of Aramex Europe & North America, before founding eShopWorld in 2010.

eShopWorld is an award-winning company employing more than 400 people, including teams of experts all over the globe helping to engineer the customer journey across strategy, technology, marketing, payments, logistics, compliance, and customer service.

Questions Melissa Asked

  1. We are seeing a big trend toward local living during COVID. How does eShopWorld help deliver a sense of local for brands entering new markets? What do you see as the biggest, consistent challenges?
  2. How does your company improve customer lifetime value for your customers?
  3. One of the keys to success in 2020 and beyond will be agility. How does your company help empower that?
  4. How long does it take to get a brand up and running?
  5. How do you see eShopWorld bridging online and offline in the future, especially as we live in a drop-ship world?
  6. YETI is a coveted brand here in the US, and your company recently helped them expand into Canada. What made that successful?
  7. We are not able to travel today like we once were, but we will again. When we do, what are the three must-see things in Dublin for our listeners?

Listen To Previous Episodes of Retail Refined Right Here!

Follow us on social media for the latest updates in B2B!
