How Cloud-to-Edge Technology Helps Handle the Immense Amount of Data Generated by Practitioners
Healthcare organizations generate massive amounts of data, so much so that the challenge becomes how and where to move it and store it.
Health and Life Science at the Edge host, Gabrielle Bejarano, spoke with Zettar’s Chin Fang and Intel’s Michael McManus for a peek inside the technology solutions Zettar and Intel are partnering on to advance the challenge of data movement, processing, and storage for healthcare organizations.
Genomics files are a perfect example of the sizable data files life science companies are creating. “When you go to genomics, for example, genomics files, a whole genome for a single person is about 350 gigabytes,” McManus says. “So, if you’re sequencing many people, you multiply however many people you’re sequencing times 350 gigabytes for your storage planning purpose.” He adds that cryo-electron microscopy datasets run to multiple terabytes in size. The data is broken into single-gigabyte movies, and the average microscope can create thousands of those an hour. And then, the data requires analysis.
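The storage-planning arithmetic McManus describes can be sketched in a few lines. The 350 GB-per-genome and roughly 1 GB-per-movie figures come from the interview; the cohort size and capture rate below are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope storage planning, using figures from the interview.
GENOME_GB = 350          # ~350 GB per whole human genome (per McManus)
MOVIE_GB = 1             # cryo-EM data arrives as single-gigabyte movies
MOVIES_PER_HOUR = 1000   # "thousands" per hour; 1,000 is an assumed low end

def genomics_storage_gb(n_people: int) -> int:
    """Storage needed to hold whole genomes for a sequencing cohort."""
    return n_people * GENOME_GB

def cryo_em_storage_gb(hours: float) -> float:
    """Storage for a cryo-EM session at the assumed capture rate."""
    return hours * MOVIES_PER_HOUR * MOVIE_GB

print(genomics_storage_gb(1000))  # 1,000 genomes -> 350000 GB (~350 TB)
print(cryo_em_storage_gb(8))      # an 8-hour session -> 8000 GB (~8 TB)
```

Even at these conservative assumptions, a modest sequencing program or a single busy microscope quickly reaches hundreds of terabytes.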
Chin adds that some institutions Zettar works with run processes that generate files as large as four terabytes at rates of one terabyte per second, which requires aggressive data reduction to bring the gross rate down to 800 Gbps, or 100 gigabytes per second.
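The rates Chin cites are consistent with each other, which a quick unit conversion confirms (this sketch assumes decimal SI prefixes, i.e. 1 TB = 1,000 GB and 1 byte = 8 bits):

```python
# Sanity-check the data rates from the interview: raw generation at
# 1 TB/s, reduced by aggressive data reduction to a gross 800 Gbps.
RAW_TB_PER_S = 1
RAW_GBPS = RAW_TB_PER_S * 1000 * 8   # 1 TB/s = 8,000 gigabits per second
REDUCED_GBPS = 800

print(REDUCED_GBPS / 8)              # 800 Gbps = 100.0 gigabytes per second
print(RAW_GBPS / REDUCED_GBPS)       # reduction factor of 10.0
```

In other words, the data reduction step shrinks the stream by roughly an order of magnitude before it is moved and stored.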
These huge datasets cause a myriad of problems for technicians, scientists, and IT staff, including wasted time, frustration, and a loss of productivity.
Standard data-moving practices can’t handle these complex situations, so Zettar’s approach uses an efficient, unified data-mover model. “With Intel’s help, we developed this real-time capability, so now we make the real-time data movement readily available to anyone who needs it, and then this will have great benefits to accelerate the healthcare and life sciences research efforts going forward,” Chin says.