Student: Max Gittel
Email: max.gittel@gmail.com
Status: FINISHED
Supervisor:

Abstract

Files

Documentation

Categorized References


**Generating Textures for Untextured Meshes**

* Text2Tex: Text-driven Texture Synthesis via Diffusion Models [PDF](https://arxiv.org/pdf/2303.11396)
* TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [PDF](https://arxiv.org/pdf/2310.13772)
* Texture Fields: Learning Texture Representations in Function Space [PDF](https://arxiv.org/pdf/1905.07259)
* TEXTure: Text-Guided Texturing of 3D Shapes [PDF](https://arxiv.org/pdf/2302.01721)
* Texturify: Generating Textures on 3D Shape Surfaces [PDF](https://arxiv.org/pdf/2204.02411)
* LATTE3D: Large-scale Amortized Text-To-Enhanced3D Synthesis [PDF](https://arxiv.org/pdf/2403.15385)
* Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering [PDF](https://arxiv.org/pdf/2312.11360)
* Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models [PDF](https://arxiv.org/pdf/2312.13913)

**Texture from Drawings**

* 3D-aware Conditional Image Synthesis [PDF](https://arxiv.org/pdf/2302.08509)
* Deep3DSketch+: Obtaining Customized 3D Model by Single Free-Hand Sketch through Deep Learning [PDF](https://arxiv.org/pdf/2310.18609)

**3D Model from Images**

* Fine Detailed Texture Learning for 3D Meshes with Generative Models [PDF](https://arxiv.org/pdf/2203.09362)
* GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images [PDF](https://arxiv.org/pdf/2209.11163)
* Convolutional Generation of Textured 3D Meshes [PDF](https://arxiv.org/pdf/2006.07660)
* Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images [PDF](https://arxiv.org/pdf/1804.01654)

**3D Model from Text**

* Zero-Shot Text-Guided Object Generation with Dream Fields [PDF](https://arxiv.org/pdf/2112.01455)
* Shap-E: Generating Conditional 3D Implicit Functions [PDF](https://arxiv.org/pdf/2305.02463)
* CLIP-Mesh: Generating textured meshes from text using pretrained image-text models [PDF](https://arxiv.org/pdf/2203.13333)
* Magic3D: High-Resolution Text-to-3D Content Creation [PDF](https://arxiv.org/pdf/2211.10440)
* Diffusion-Based Signed Distance Fields for 3D Shape Generation [PDF](https://openaccess.thecvf.com/content/CVPR2023/papers/Shim_Diffusion-Based_Signed_Distance_Fields_for_3D_Shape_Generation_CVPR_2023_paper.pdf)
* Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models [PDF](https://arxiv.org/pdf/2212.14704)

**View Reconstruction**

* Zero-1-to-3: Zero-shot One Image to 3D Object [PDF](https://arxiv.org/pdf/2303.11328)

**Modeling Like a Human**

* Modeling 3D Shapes by Reinforcement Learning [PDF](https://arxiv.org/pdf/2003.12397)
* PolyGen: An Autoregressive Generative Model of 3D Meshes [PDF](https://arxiv.org/pdf/2002.10880)

**Representation**

* NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [PDF](https://arxiv.org/pdf/2003.08934)
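
Since NeRF underpins several of the methods above, a minimal sketch of the representation itself may be useful: a scene is stored as a function f(position, view direction) → (color, density), parameterized by an MLP and queried at sample points along camera rays. The network width, depth, and number of encoding frequencies below are illustrative assumptions rather than the paper's exact architecture, and the volume-rendering step along rays is omitted.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    # Lift coordinates into sin/cos features so the MLP can
    # represent high-frequency detail.
    feats = [x]
    for i in range(num_freqs):
        feats += [torch.sin(2.0**i * x), torch.cos(2.0**i * x)]
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    """Toy radiance field: (xyz, view direction) -> (rgb, density)."""
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        in_dim = 3 * (1 + 2 * num_freqs) + 3  # encoded position + raw view dir
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 density
        )

    def forward(self, xyz, view_dir):
        h = self.mlp(torch.cat([positional_encoding(xyz), view_dir], dim=-1))
        rgb = torch.sigmoid(h[..., :3])   # colors constrained to [0, 1]
        sigma = torch.relu(h[..., 3:])    # density must be non-negative
        return rgb, sigma

# Query the field at random sample points, as one would along camera rays.
model = TinyNeRF()
pts = torch.rand(1024, 3)  # 3D sample positions
dirs = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
rgb, sigma = model(pts, dirs)
```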

**Mapping PBR Materials to Meshes Using Images**

* PhotoShape: Photorealistic Materials for Large-Scale Shape Collections [PDF](https://arxiv.org/pdf/1809.09761)

**Extending Textures to Infinity**

* GramGAN: Deep 3D Texture Synthesis From 2D Exemplars [PDF](https://arxiv.org/pdf/2006.16112)

Gitlab Repository



Workflow

Start

  • Topic specification
  • Registration
  • Creation of a wiki page (supervisor)
  • Creation of a gitlab repository or branch
  • Access to lab and computers

Finalization

  • Check code base and data
  • Check documentation 
  • Provide an example notebook that describes the workflow/usage of your code (in your repo)
  • Proofread the written composition
  • Submission of the written composition