Scalable systems for machine learning (ML) are largely siloed into dataflow systems for structured data and deep learning systems for unstructured data. This gap has left workloads that jointly analyze both forms of data with poor systems support, leading to both low system efficiency and tedious manual effort for users. We bridge this gap for an important class of such workloads: feature transfer from deep convolutional neural networks (CNNs) for analyzing images along with structured data. Executing feature transfer on scalable dataflow and deep learning systems today faces two key systems issues: inefficiency due to redundant computations and crash-proneness due to mismanaged memory. We present Vista, a new data system that resolves these issues by elevating this workload to a declarative level on top of dataflow and deep learning systems. Vista automatically optimizes the configuration and execution of this workload to reduce both computational redundancy and the potential for workload crashes. Experiments on real datasets show that, apart from making feature transfer easier, Vista avoids workload crashes and reduces runtimes by up to 10X compared to baselines.
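
For concreteness, the following is a minimal sketch of the feature transfer workload described above, assuming a Python stack with PyTorch, torchvision, and scikit-learn; it is not Vista's API (which the abstract does not specify), and the inputs image_paths, structured_X, and y are hypothetical placeholders for an image column aligned row-wise with a table of structured features and labels.

    # Hedged sketch of CNN feature transfer: extract features from a fixed
    # layer of a pre-trained CNN for each image, concatenate them with
    # structured features, and train a downstream model on the joint table.
    import numpy as np
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image
    from sklearn.linear_model import LogisticRegression

    # Pre-trained CNN with its classifier head replaced by an identity map,
    # so the forward pass yields the penultimate-layer feature vector.
    cnn = models.resnet50(pretrained=True)
    cnn.fc = torch.nn.Identity()
    cnn.eval()

    # Standard ImageNet preprocessing for the pre-trained model.
    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def image_features(paths):
        # Run each image through the CNN and return its feature vectors.
        with torch.no_grad():
            batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                                 for p in paths])
            return cnn(batch).numpy()

    # image_paths, structured_X, and y are assumed inputs (placeholders).
    img_X = image_features(image_paths)         # shape: (n, 2048)
    joint_X = np.hstack([structured_X, img_X])  # fuse both forms of data
    model = LogisticRegression(max_iter=1000).fit(joint_X, y)

In practice, such hand-rolled pipelines must pick a CNN layer, batch size, and memory configuration manually for each run; the redundant CNN inference across layer choices and the mismanaged memory this invites are exactly the inefficiency and crash-proneness the abstract refers to.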