Combining flow field generation with deep learning to improve the efficiency of flow simulation is an active area of research. Current approaches primarily use conditional generative adversarial networks (cGANs) to generate velocity fields guided by sketches; however, cGAN training is unstable. In this study, we propose a novel 2D velocity field design and generation framework that leverages the latent diffusion model (LDM). The sketch serves as a constraint that guides the denoising process within the LDM and the reconstruction of the 2D velocity field. Our framework generates velocity fields that align with the shape of a given sketch. We verified the robustness of the proposed framework in comparison with cGAN-based methods.
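As a rough illustration of how a sketch can condition the denoising process, the following is a minimal DDIM-style reverse step in latent space; the `denoiser` network, its signature, and the `sketch_emb` conditioning interface are hypothetical stand-ins, not the paper's architecture.

```python
import torch

@torch.no_grad()
def denoise_step(z_t, t, sketch_emb, denoiser, alphas_cumprod):
    """One deterministic (DDIM-style) reverse step in latent space,
    conditioned on an encoded sketch.

    z_t        : noisy latent at timestep t           (B, C, H, W)
    sketch_emb : encoded sketch used as the condition (B, D)
    denoiser   : hypothetical noise-prediction network eps(z_t, t, cond)
    """
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    eps = denoiser(z_t, t, sketch_emb)                  # predicted noise
    z0 = (z_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()    # estimate of clean latent
    # Deterministic update toward timestep t-1.
    return a_prev.sqrt() * z0 + (1 - a_prev).sqrt() * eps
```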
In this study, we propose an efficient city-generation method based on user sketches. The proposed framework combines conditional generative adversarial networks (cGANs) and procedural modeling, which we call the Neurosymbolic Model. cGAN training requires a dataset of linked input-output pairs, so we first generate buildings of random height using Perlin noise as the training data. Then, the building contours are extracted by morphological transformation. For training, we use pairs of height maps created from the city data and the sketches extracted by morphological transformation. This allows users to generate diverse and satisfying cities from freehand sketches.
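The pairing of generated height maps with extracted contours could look like the sketch below, which substitutes smoothed random noise for Perlin noise and uses OpenCV's morphological gradient; all names and parameters are illustrative assumptions.

```python
import numpy as np
import cv2

def make_training_pair(size=256, seed=0):
    """Build one (sketch, height map) training pair, using smoothed random
    noise as a stand-in for Perlin noise."""
    rng = np.random.default_rng(seed)
    noise = rng.random((size, size)).astype(np.float32)
    height = cv2.GaussianBlur(noise, (0, 0), sigmaX=8)   # low-frequency terrain
    height = cv2.normalize(height, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Quantize into discrete building heights, then extract contours with a
    # morphological gradient (dilation minus erosion).
    buildings = (height // 32) * 32
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    sketch = cv2.morphologyEx(buildings, cv2.MORPH_GRADIENT, kernel)
    return sketch, buildings
```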
Designing a visually appealing layout is difficult for common users and time-consuming even for professional designers. In this paper, we present an interactive layout design system with shadow guidance and layout retrieval to help users obtain satisfactory design results. This study focuses in particular on the design of academic presentation slides. The shadow guidance is rendered as a heat map that visualizes the layout distribution of our collected dataset, which the user can consult while designing. The proposed system is data-driven, allowing users to explore the design data naturally. The user may then edit the layout to finalize the design. We validated the proposed interface in a user study comparing it with common design interfaces. The findings show that the proposed interface achieves high retrieval accuracy while offering a pleasant user experience.
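A heat map of this kind can be accumulated directly from layout bounding boxes; the minimal sketch below assumes normalized (x, y, w, h) boxes and is only an illustration of the idea.

```python
import numpy as np

def layout_heatmap(layouts, canvas=(192, 256)):
    """Accumulate element bounding boxes from a layout dataset into a heat map.

    layouts : list of layouts, each a list of (x, y, w, h) boxes in [0, 1].
    Returns an (H, W) density map normalized to [0, 1] for shadow guidance.
    """
    H, W = canvas
    heat = np.zeros((H, W), dtype=np.float32)
    for boxes in layouts:
        for x, y, w, h in boxes:
            x0, y0 = int(x * W), int(y * H)
            x1, y1 = int((x + w) * W), int((y + h) * H)
            heat[y0:y1, x0:x1] += 1.0                 # each box votes for its area
    return heat / max(heat.max(), 1e-6)
```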
We propose an interactive 3D character modeling approach that works from orthographic drawings (e.g., front and side views) based on 2D-space annotations. First, the system builds partial correspondences between the drawings and generates a base mesh with sweeping splines according to edge information in the 2D images. Next, users annotate the desired parts on the input drawings (e.g., eyes and mouth) by drawing two types of strokes, named addition and erosion, and the system re-optimizes the shape of the base mesh using the annotations. By repeating these 2D-space operations (i.e., revising and modifying the annotations), users can design a desired character model. To validate the efficiency and quality of our system, we compare the generated results with those of state-of-the-art methods.
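A simplified stand-in for the sweeping-spline construction is to sweep a circular cross-section along a polyline centerline, as sketched below; the paper's actual base-mesh generation from edge correspondences is more involved.

```python
import numpy as np

def sweep_tube(centerline, radii, n_sides=16):
    """Sweep a circular cross-section along a polyline centerline.

    centerline : (N, 3) array of points along the part's axis
    radii      : (N,) cross-section radius at each point
    Returns (vertices, quad faces) of the resulting tube mesh.
    """
    verts = []
    for i, (p, r) in enumerate(zip(centerline, radii)):
        # Tangent by finite differences; build an orthonormal frame around it.
        t = centerline[min(i + 1, len(centerline) - 1)] - centerline[max(i - 1, 0)]
        t = t / np.linalg.norm(t)
        u = np.cross(t, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-6:                  # tangent parallel to z
            u = np.cross(t, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(t, u)
        ang = np.linspace(0, 2 * np.pi, n_sides, endpoint=False)
        ring = p + r * (np.outer(np.cos(ang), u) + np.outer(np.sin(ang), v))
        verts.append(ring)
    verts = np.vstack(verts)
    faces = [(i * n_sides + j, i * n_sides + (j + 1) % n_sides,
              (i + 1) * n_sides + (j + 1) % n_sides, (i + 1) * n_sides + j)
             for i in range(len(centerline) - 1) for j in range(n_sides)]
    return verts, np.array(faces)
```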
KEYWORDS: 3D modeling, 3D scanning, Data modeling, Augmented reality, Telecommunications, Light sources, Solid modeling, Cameras, Prototyping, Projection systems
A shadow implicitly conveys the presence of a human or object in various interaction and media art designs. However, it is challenging to generate a natural artificial shadow for spatial augmented reality, as conventional approaches ignore object dynamics and automatic control. In this work, we propose an interactive shadow generation system that creates interactive shadows of users with a projector-camera system. With offline processes for human mesh modeling and virtual environment registration, the proposed system rigs a 3D model created by scanning the user and uses it to generate the shadow. Finally, the generated shadow is projected into the real environment. We verify the usability of the proposed system and the impression of the generated shadow through a user study.
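Purely as an illustration of the underlying geometry, one classical way to flatten a rigged mesh into a shadow is the planar shadow projection matrix for a point light; the paper's projector-camera pipeline is not reproduced here.

```python
import numpy as np

def planar_shadow_matrix(plane, light):
    """Classic planar-shadow projection onto a ground plane.

    plane : (a, b, c, d) with plane equation a*x + b*y + c*z + d = 0
    light : (x, y, z) position of the point light
    """
    n = np.asarray(plane, dtype=np.float64)           # (a, b, c, d)
    L = np.append(np.asarray(light, dtype=np.float64), 1.0)
    dot = n @ L
    return dot * np.eye(4) - np.outer(L, n)

def project_shadow(vertices, M):
    """Apply the 4x4 shadow matrix to (N, 3) mesh vertices."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))]) @ M.T
    return homo[:, :3] / homo[:, 3:4]                 # perspective divide
```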
KEYWORDS: 3D modeling, Clouds, Data modeling, Human-machine interfaces, Cognitive modeling, Interfaces, 3D acquisition, Image retrieval, Image filtering, 3D image processing
3D modeling based on point clouds is an efficient way to reconstruct and create detailed 3D content. However, geometric reconstruction may lose accuracy because point clouds are highly redundant and lack explicit structure. In this work, we propose a human-in-the-loop sketch-based point cloud reconstruction framework that leverages users' cognitive abilities in geometry extraction. We present an interactive drawing interface for creating 3D models from point cloud data with the help of user sketches. We adopt an optimization method in which the user can continuously edit the contours extracted from the obtained 3D model and iteratively retrieve an updated model. Finally, we verify the proposed user interface for modeling from sparse point clouds.
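A minimal version of the contour-extraction step might project the point cloud orthographically and trace the outer contour with OpenCV, as sketched below; the projection direction, resolution, and morphological closing are all illustrative assumptions.

```python
import numpy as np
import cv2

def point_cloud_contour(points, size=256):
    """Project a point cloud orthographically onto the xy plane and trace its
    outer contour, i.e., the editable curve in the sketch-based loop.

    points : (N, 3) array of point positions.
    """
    xy = points[:, :2]
    span = xy.max(0) - xy.min(0)
    xy = (xy - xy.min(0)) / (span + 1e-9)             # normalize to [0, 1]
    mask = np.zeros((size, size), dtype=np.uint8)
    px = np.clip((xy * (size - 1)).astype(int), 0, size - 1)
    mask[px[:, 1], px[:, 0]] = 255
    # Close gaps between sparse samples before tracing the outer contour.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)
```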
With the emergence of massive open online courses and online academic conferences, accessing online educational resources has become increasingly feasible and convenient. However, it is time-consuming and challenging for common users to effectively retrieve and browse numerous lecture videos. In this work, we propose a hierarchical visual interface for retrieving and summarizing lecture videos. Users can explore the required video information through summaries generated at different layers: input keywords are matched against a video layer with timestamps, a frame layer with slides, and a poster layer that summarizes each lecture video. We verified the proposed interface in a user study comparing it with conventional interfaces. The results confirmed that the proposed interface achieves high retrieval accuracy and a good user experience.
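A toy version of such layered keyword retrieval is sketched below; the dictionary layout, field names, and two-layer index are illustrative assumptions, not the paper's data format.

```python
from collections import defaultdict

def build_index(videos):
    """Toy hierarchical index: keyword -> hits at the frame and poster layers.

    videos : list of dicts like
        {"title": str,
         "frames": [{"time": float, "slide_text": str}],
         "poster": str}
    """
    index = defaultdict(list)
    for vid, video in enumerate(videos):
        for word in video["poster"].lower().split():
            index[word].append(("poster", vid, None))
        for f in video["frames"]:
            for word in f["slide_text"].lower().split():
                index[word].append(("frame", vid, f["time"]))
    return index

def retrieve(index, keyword):
    """Return matches grouped by layer; each frame hit carries a timestamp."""
    hits = index.get(keyword.lower(), [])
    return {layer: [(v, t) for l, v, t in hits if l == layer]
            for layer in ("poster", "frame")}
```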
KEYWORDS: 3D modeling, Animal model studies, Data modeling, 3D image processing, Image retrieval, 3D displays, Human-machine interfaces, 3D vision, Image processing, Switches
In this work, we propose an interactive drawing guidance interface with 3D animal model retrieval, which helps common users draw 2D animal sketches by exploring desired animal models from a pre-collected dataset. We first construct an animal model dataset and generate line drawing images of the 3D models from different viewpoints. Then, we develop the drawing interface, which displays retrieved models by matching freehand sketch inputs against the line drawing images. We adopt a state-of-the-art sketch-based image retrieval algorithm for sketch matching, which describes the appearance and relative positions of multiple objects by measuring compositional similarity. The proposed system can accurately retrieve similar partial images and blend shadow guidance beneath the user's strokes to guide the drawing process. Our user study verified that the proposed interface improves the drawing quality of users' animal sketches.
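As a simple stand-in for the compositional similarity measure, the sketch below ranks line drawings by symmetric chamfer distance between binary edge maps; the actual retrieval algorithm handles object composition and is more sophisticated.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(sketch, drawing):
    """Symmetric chamfer distance between two binary line images.

    sketch, drawing : (H, W) boolean arrays, True on stroke pixels.
    Lower scores mean a better match.
    """
    # Distance from every pixel to the nearest stroke pixel in each image.
    d_sketch = distance_transform_edt(~sketch)
    d_drawing = distance_transform_edt(~drawing)
    a = d_drawing[sketch].mean() if sketch.any() else np.inf
    b = d_sketch[drawing].mean() if drawing.any() else np.inf
    return 0.5 * (a + b)

def retrieve_best(sketch, drawings):
    """Rank pre-rendered line drawings of the 3D models against a user sketch."""
    scores = [chamfer_score(sketch, d) for d in drawings]
    return int(np.argmin(scores))
```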
In this paper, we consider the problem of affine subspace clustering, which requires estimating the underlying subspaces and assigning labels to data points lying on or near a union of low-dimensional affine subspaces. To address this problem, we propose a framework based on Nearest Subspace Neighbor (NSN). NSN was originally designed to capture the geometric structure of clusters that conventional approaches based on a general distance metric, such as K-means, cannot adequately handle, and it has been applied to the linear subspace clustering problem. In real-world scenarios, however, the vast majority of data lie in affine rather than linear subspaces. To make better use of NSN, we construct an affinity matrix by incrementally picking points in a greedy fashion while accounting for affine subspaces. Experiments demonstrate that our method outperforms both the original NSN and an existing affine subspace clustering method.
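A minimal sketch of greedy NSN adapted to affine subspaces is given below, using the common homogeneous-coordinate embedding that turns a d-dimensional affine subspace into a (d+1)-dimensional linear one; the normalization and stopping rule are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def nsn_affine_affinity(X, k):
    """Greedy Nearest Subspace Neighbor on affine subspaces.

    X : (n, d) data matrix, one point per row.
    k : number of neighbors collected per point.
    Returns a symmetric (n, n) 0/1 affinity matrix for spectral clustering.
    """
    n = len(X)
    Xh = np.hstack([X, np.ones((n, 1))])              # homogeneous embedding
    Xh = Xh / np.linalg.norm(Xh, axis=1, keepdims=True)
    W = np.zeros((n, n))
    for i in range(n):
        members = [i]
        for _ in range(k):
            # Orthonormal basis of the subspace spanned by current members.
            Q, _ = np.linalg.qr(Xh[members].T)
            # Residual of every point after projecting onto that subspace.
            proj = Xh @ Q
            residual = 1.0 - np.sum(proj ** 2, axis=1)
            residual[members] = np.inf                # exclude chosen points
            j = int(np.argmin(residual))
            members.append(j)
            W[i, j] = W[j, i] = 1.0
    return W
```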
The normal map is an important and efficient representation of complex 3D models, and designers can benefit from the automatic generation of high-quality, accurate normal maps from freehand sketches in 3D content creation. This paper proposes a deep generative model that generates normal maps from users' sketches using geometric sampling. Our generative model is based on a conditional generative adversarial network with curvature-sensitive point sampling of the conditional masks. This sampling process helps eliminate ambiguity in the network input and thus in the generated results. We verify that the proposed framework generates more accurate normal maps.
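Curvature-sensitive sampling of stroke points can be approximated by weighting vertices by their turning angle, as in the minimal sketch below; the weighting scheme and epsilon floor are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def curvature_sample(stroke, n_samples):
    """Sample stroke points with probability proportional to local curvature.

    stroke    : (N, 2) ordered polyline stroke points.
    n_samples : number of points to draw (must not exceed N - 2).
    """
    d1 = np.diff(stroke, axis=0)
    # Turning angle at each interior vertex serves as a curvature proxy.
    ang = np.arctan2(d1[:, 1], d1[:, 0])
    turn = np.abs(np.diff(np.unwrap(ang)))
    weights = turn + 1e-3                             # keep flat regions samplable
    prob = weights / weights.sum()
    idx = np.random.choice(len(prob), size=n_samples, replace=False, p=prob)
    return stroke[idx + 1]                            # map back to vertex indices
```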
Rapid progress has been made in both augmented and virtual reality technologies. However, it is still challenging to seamlessly connect the virtual world with the real world, such as placing virtual three-dimensional models in the real environment and interacting with them directly. In this study, we propose a wearable augmented reality system with a head-mounted device that enables the projection of anamorphic images. The proposed system tracks the user's head movements and projects the designated scene into real space in real time. To achieve this, the system performs three steps: room scaling, blur correction for the projected content, and calibration using dynamic mesh generation. We evaluated the proposed system through interaction with virtual content and characters; participants rated the interaction with the virtual character highly. The system can serve a wide range of applications in daily life and entertainment, such as relieving loneliness or guiding visitors in museums.
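In its simplest planar form, the calibration step could be a homography estimated from projector-camera correspondences; the sketch below assumes a planar surface and matched checkerboard corners, which is far simpler than the dynamic mesh generation described above.

```python
import numpy as np
import cv2

def calibrate_projection(projector_pts, camera_pts):
    """Estimate the projector-to-surface mapping from matched points.

    projector_pts, camera_pts : (N, 2) matched 2D correspondences (N >= 4),
    e.g. detected corners of a projected checkerboard seen by the head camera.
    """
    H, _ = cv2.findHomography(np.float32(projector_pts),
                              np.float32(camera_pts), cv2.RANSAC)
    return H

def warp_content(image, H, out_size):
    """Pre-warp content so it appears undistorted on the projection surface."""
    return cv2.warpPerspective(image, np.linalg.inv(H), out_size)
```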