Introduction: Computer vision methods may augment endoscopic treatment of upper tract urothelial carcinoma by improving tumor visualization and tracking ablated tissue. We sought to use computer vision techniques to automatically detect urothelial tumors and ablated tissue in real time during ureteroscopy.

Methods: Eight separate surgical videos of urothelial tumor diagnosis and ablation via ureteroscopy were recorded using digital ureteroscopes. From each video we isolated clips showing (a) initial tumor identification and (b) tumor ablation. Frames were extracted from each clip at 3 frames per second (fps), for a total of 1087 frames. All frames were manually annotated by one urologist to identify urothelial tumors and ablated tissue. Frames from 6 videos (n=870, 80%) were used to train a computer vision segmentation model (U-Net++). The 217 frames (20%) from the remaining 2 videos were reserved as a test set and automatically segmented by the model. Model segmentation was compared with manual annotation of urothelial tumors and ablated tissue using area under the receiver operating characteristic curve (AUC-ROC), per-pixel accuracy, and Dice similarity coefficient (DSC).

Results: We identified 23 different tumors across the 8 videos. All tumors were ablated using a holmium laser. Mean durations of the tumor identification and tumor ablation clips were 37 ± 40 s and 27 ± 18 s, respectively. The model segmented upper tract tumors with good performance: AUC-ROC of 0.78, accuracy of 0.95, and DSC of 0.65. Similarly, the model segmented ablated tissue well, with an AUC-ROC of 0.86, accuracy of 0.88, and DSC of 0.463. We implemented a working system for processing real-time video feeds with overlaid model predictions (Fig. 1). The models were able to annotate new videos at 30 fps.

Conclusions: Computer vision models demonstrate good performance for automatic segmentation of upper tract urothelial tumors and ablated tissue during ureteroscopy, and their application is feasible in real time. With further optimization, these models could augment endoscopic vision and improve endoscopic tumor ablation.

Source of Funding: none.
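
As a concrete illustration of the frame-extraction step in the Methods, the following sketch samples a surgical video clip at roughly 3 fps using OpenCV. The paths, output naming, and fallback frame rate are illustrative assumptions, not details taken from the study.

import cv2

def extract_frames(video_path, out_dir, target_fps=3):
    # Save roughly target_fps frames per second of video to out_dir as PNGs.
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if metadata is missing (assumption)
    step = max(1, round(native_fps / target_fps))    # keep every step-th frame
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:05d}.png", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved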
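
The abstract names the architecture (U-Net++) but not the implementation. A minimal training sketch, assuming the segmentation_models_pytorch library and illustrative choices of encoder, loss function, and optimizer (none of which are specified in the abstract), could look like this:

import torch
import segmentation_models_pytorch as smp

device = "cuda" if torch.cuda.is_available() else "cpu"
model = smp.UnetPlusPlus(encoder_name="resnet34", in_channels=3, classes=1).to(device)
loss_fn = smp.losses.DiceLoss(mode="binary")            # binary mask: target tissue vs. background
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(loader):
    model.train()
    for images, masks in loader:                        # masks derived from the manual annotations
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        logits = model(images)                          # (B, 1, H, W) raw predictions
        loss = loss_fn(logits, masks)
        loss.backward()
        optimizer.step()

The ablated-tissue target could be handled by a second model or an additional output channel; the abstract does not specify which approach was used.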
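
The reported metrics (AUC-ROC, per-pixel accuracy, and DSC) can be computed per test frame against the manual annotations roughly as sketched below; the 0.5 threshold and the function name are illustrative assumptions.

import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_frame(prob_map, gt_mask, threshold=0.5):
    pred = (prob_map >= threshold).astype(np.uint8)
    gt = gt_mask.astype(np.uint8)
    accuracy = (pred == gt).mean()                            # per-pixel accuracy
    intersection = np.logical_and(pred, gt).sum()
    dsc = 2 * intersection / (pred.sum() + gt.sum() + 1e-8)   # Dice similarity coefficient
    auc = roc_auc_score(gt.ravel(), prob_map.ravel())         # assumes both classes appear in the frame
    return accuracy, dsc, auc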
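
The real-time system in the Results (Fig. 1) overlays model predictions on a live video feed. A sketch of such a loop, with an assumed capture source, input size, threshold, and overlay color, is shown below.

import cv2
import numpy as np
import torch

def run_overlay(model, source=0, device="cpu"):
    model.eval().to(device)
    cap = cv2.VideoCapture(source)                     # live ureteroscope feed or recorded video
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x = cv2.resize(frame, (512, 512))              # U-Net++ expects dimensions divisible by 32
        x = torch.from_numpy(x).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        with torch.no_grad():
            prob = torch.sigmoid(model(x.to(device)))[0, 0].cpu().numpy()
        prob = cv2.resize(prob, (frame.shape[1], frame.shape[0]))
        overlay = frame.copy()
        overlay[prob >= 0.5] = (0, 255, 0)             # highlight predicted tumor/ablation pixels
        blended = cv2.addWeighted(frame, 0.6, overlay, 0.4, 0)
        cv2.imshow("segmentation overlay", blended)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()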