/*
 * Copyright © 2011-2012 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *    Ben Widawsky <ben@bwidawsk.net>
 *
 */

/*
 * This file implements HW context support. On gen5+ a HW context consists of an
 * opaque GPU object which is referenced at times of context saves and restores.
 * With RC6 enabled, the context is also referenced as the GPU enters and exits
 * RC6 (the GPU has its own internal power context, except on gen5). Though
 * something like a context does exist for the media ring, the code only
 * supports contexts for the render ring.
 *
 * In software, there is a distinction between contexts created by the user,
 * and the default HW context. The default HW context is used by GPU clients
 * that do not request setup of their own hardware context. The default
 * context's state is never restored to help prevent programming errors. This
 * would happen if a client ran and piggy-backed off another client's GPU state.
 * The default context only exists to give the GPU some offset to load as the
 * current context in order to invoke a save of the context we actually care
 * about. In fact, the code could likely be constructed, albeit in a more
 * complicated fashion, to never use the default context, though that limits
 * the driver's ability to swap out, and/or destroy other contexts.
 *
 * All other contexts are created as a request by the GPU client. These contexts
 * store GPU state, and thus allow GPU clients to not re-emit state (and
 * potentially query certain state) at any time. The kernel driver makes
 * certain that the appropriate commands are inserted.
 *
 * The context life cycle is semi-complicated in that context BOs may live
 * longer than the context itself because of the way the hardware, and object
 * tracking works. Below is a very crude representation of the state machine
 * describing the context life.
 *                                         refcount     pincount     active
 * S0: initial state                          0            0           0
 * S1: context created                        1            0           0
 * S2: context is currently running           2            1           X
 * S3: GPU referenced, but not current        2            0           1
 * S4: context is current, but destroyed      1            1           0
 * S5: like S3, but destroyed                 1            0           1
 *
 * The most common (but not all) transitions:
 * S0->S1: client creates a context
 * S1->S2: client submits execbuf with context
 * S2->S3: another client submits execbuf with context
 * S3->S1: context object was retired
 * S3->S2: client submits another execbuf
 * S2->S4: context destroy called with current context
 * S3->S5->S0: destroy path
 * S4->S5->S0: destroy path on current context
 *
 * There are two confusing terms used above:
 *  The "current context" means the context which is currently running on the
 *  GPU. The GPU has loaded its state already and has stored away the gtt
 *  offset of the BO. The GPU is not actively referencing the data at this
 *  offset, but it will on the next context switch. The only way to avoid this
 *  is to do a GPU reset.
 *
 *  An "active context" is one which was previously the "current context" and
 *  is on the active list waiting for the next context switch to occur. Until
 *  this happens, the object must remain at the same gtt offset. It is
 *  therefore possible to destroy a context, but it is still active.
 *
 */

#include <drm/drmP.h>
#include <drm/i915_drm.h>
#include "i915_drv.h"
#include <linux/err.h>

/* This is a HW constraint. The value below is the largest known requirement
 * I've seen in a spec to date, and that was a workaround for a non-shipping
 * part. It should be safe to decrease this, but it's more future proof as is.
 */
#define CONTEXT_ALIGN (64<<10)

static struct i915_hw_context *
i915_gem_context_get(struct drm_i915_file_private *file_priv, u32 id);
static int do_switch(struct i915_hw_context *to);

static int get_context_size(struct drm_device *dev)
{
	struct drm_i915_private *dev_priv = dev->dev_private;
	int ret;
	u32 reg;

	switch (INTEL_INFO(dev)->gen) {
	case 6:
		reg = I915_READ(CXT_SIZE);
		ret = GEN6_CXT_TOTAL_SIZE(reg) * 64;
		break;
	case 7:
		reg = I915_READ(GEN7_CXT_SIZE);
		if (IS_HASWELL(dev))
			ret = HSW_CXT_TOTAL_SIZE(reg) * 64;
		else
			ret = GEN7_CXT_TOTAL_SIZE(reg) * 64;
		break;
	default:
		BUG();
	}

	return ret;
}

static void do_destroy(struct i915_hw_context *ctx)
{
	if (ctx->file_priv)
		idr_remove(&ctx->file_priv->context_idr, ctx->id);

	drm_gem_object_unreference(&ctx->obj->base);
	kfree(ctx);
}

static struct i915_hw_context *
create_hw_context(struct drm_device *dev,
		  struct drm_i915_file_private *file_priv)
{
	struct drm_i915_private *dev_priv = dev->dev_private;
	struct i915_hw_context *ctx;
	int ret, id;

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (ctx == NULL)
		return ERR_PTR(-ENOMEM);

	ctx->obj = i915_gem_alloc_object(dev, dev_priv->hw_context_size);
	if (ctx->obj == NULL) {
		kfree(ctx);
		DRM_DEBUG_DRIVER("Context object allocation failed\n");
		return ERR_PTR(-ENOMEM);
	}

	if (INTEL_INFO(dev)->gen >= 7) {
		ret = i915_gem_object_set_cache_level(ctx->obj,
						      I915_CACHE_LLC_MLC);
		if (ret)
			goto err_out;
	}

	/* The ring associated with the context object is handled by the normal
	 * object tracking code. We give an initial ring value simply to pass an
	 * assertion in the context switch code.
	 */
	ctx->ring = &dev_priv->ring[RCS];

	/* Default context will never have a file_priv */
	if (file_priv == NULL)
		return ctx;

	ctx->file_priv = file_priv;

again:
	if (idr_pre_get(&file_priv->context_idr, GFP_KERNEL) == 0) {
		ret = -ENOMEM;
		DRM_DEBUG_DRIVER("idr allocation failed\n");
		goto err_out;
	}

	ret = idr_get_new_above(&file_priv->context_idr, ctx,
				DEFAULT_CONTEXT_ID + 1, &id);
	if (ret == 0)
		ctx->id = id;

	if (ret == -EAGAIN)
		goto again;
	else if (ret)
		goto err_out;

	return ctx;

err_out:
	do_destroy(ctx);
	return ERR_PTR(ret);
}

static inline bool is_default_context(struct i915_hw_context *ctx)
{
	return (ctx == ctx->ring->default_context);
}

/**
 * The default context needs to exist per ring that uses contexts. It stores the
 * context state of the GPU for applications that don't utilize HW contexts, as
 * well as an idle case.
 */
static int create_default_context(struct drm_i915_private *dev_priv)
{
	struct i915_hw_context *ctx;
	int ret;

	DRM_LOCK_ASSERT(dev_priv->dev);

	ctx = create_hw_context(dev_priv->dev, NULL);
	if (IS_ERR(ctx))
		return PTR_ERR(ctx);

	/* We may need to do things with the shrinker which require us to
	 * immediately switch back to the default context. This can cause a
	 * problem as pinning the default context also requires GTT space which
	 * may not be available. To avoid this we always pin the
	 * default context.
	 */
	dev_priv->ring[RCS].default_context = ctx;
	ret = i915_gem_object_pin(ctx->obj, CONTEXT_ALIGN, false, false);
	if (ret)
		goto err_destroy;

	ret = do_switch(ctx);
	if (ret)
		goto err_unpin;

	DRM_DEBUG_DRIVER("Default HW context loaded\n");
	return 0;

err_unpin:
	i915_gem_object_unpin(ctx->obj);
err_destroy:
	do_destroy(ctx);
	return ret;
}

void i915_gem_context_init(struct drm_device *dev)
{
	struct drm_i915_private *dev_priv = dev->dev_private;

	if (!HAS_HW_CONTEXTS(dev)) {
		dev_priv->hw_contexts_disabled = true;
		return;
	}

	/* If called from reset, or thaw... we've been here already */
	if (dev_priv->hw_contexts_disabled ||
	    dev_priv->ring[RCS].default_context)
		return;

	dev_priv->hw_context_size = round_up(get_context_size(dev), 4096);

	if (dev_priv->hw_context_size > (1<<20)) {
		dev_priv->hw_contexts_disabled = true;
		return;
	}

	if (create_default_context(dev_priv)) {
		dev_priv->hw_contexts_disabled = true;
		return;
	}

	DRM_DEBUG_DRIVER("HW context support initialized\n");
}

void i915_gem_context_fini(struct drm_device *dev)
{
	struct drm_i915_private *dev_priv = dev->dev_private;

	if (dev_priv->hw_contexts_disabled)
		return;

	/* The only known way to stop the GPU from accessing the HW context is
	 * to reset it. Do this as the very last operation to avoid confusing
	 * other code, leading to spurious errors. */
	intel_gpu_reset(dev);

	i915_gem_object_unpin(dev_priv->ring[RCS].default_context->obj);

	do_destroy(dev_priv->ring[RCS].default_context);
}

static int context_idr_cleanup(int id, void *p, void *data)
{
	struct i915_hw_context *ctx = p;

	BUG_ON(id == DEFAULT_CONTEXT_ID);

	do_destroy(ctx);

	return 0;
}

void i915_gem_context_close(struct drm_device *dev, struct drm_file *file)
{
	struct drm_i915_file_private *file_priv = file->driver_priv;

	mutex_lock(&dev->struct_mutex);
	idr_for_each(&file_priv->context_idr, context_idr_cleanup, NULL);
	idr_destroy(&file_priv->context_idr);
	mutex_unlock(&dev->struct_mutex);
}

static struct i915_hw_context *
i915_gem_context_get(struct drm_i915_file_private *file_priv, u32 id)
{
	return (struct i915_hw_context *)idr_find(&file_priv->context_idr, id);
}

static inline int
mi_set_context(struct intel_ring_buffer *ring,
	       struct i915_hw_context *new_context,
	       u32 hw_flags)
{
	int ret;

	/* w/a: If Flush TLB Invalidation Mode is enabled, driver must do a TLB
	 * invalidation prior to MI_SET_CONTEXT. On GEN6 we don't set the value
	 * explicitly, so we rely on the value at ring init, stored in
	 * itlb_before_ctx_switch.
	 */
	if (IS_GEN6(ring->dev) && ring->itlb_before_ctx_switch) {
		ret = ring->flush(ring, I915_GEM_GPU_DOMAINS, 0);
		if (ret)
			return ret;
	}

	ret = intel_ring_begin(ring, 6);
	if (ret)
		return ret;

	if (IS_GEN7(ring->dev))
		intel_ring_emit(ring, MI_ARB_ON_OFF | MI_ARB_DISABLE);
	else
		intel_ring_emit(ring, MI_NOOP);

	intel_ring_emit(ring, MI_NOOP);
	intel_ring_emit(ring, MI_SET_CONTEXT);
	intel_ring_emit(ring, new_context->obj->gtt_offset |
			MI_MM_SPACE_GTT |
			MI_SAVE_EXT_STATE_EN |
			MI_RESTORE_EXT_STATE_EN |
			hw_flags);
	/* w/a: MI_SET_CONTEXT must always be followed by MI_NOOP */
	intel_ring_emit(ring, MI_NOOP);

	if (IS_GEN7(ring->dev))
		intel_ring_emit(ring, MI_ARB_ON_OFF | MI_ARB_ENABLE);
	else
		intel_ring_emit(ring, MI_NOOP);

	intel_ring_advance(ring);

	return ret;
}

static int do_switch(struct i915_hw_context *to)
{
	struct intel_ring_buffer *ring = to->ring;
	struct drm_i915_gem_object *from_obj = ring->last_context_obj;
	u32 hw_flags = 0;
	int ret;

	BUG_ON(from_obj != NULL && from_obj->pin_count == 0);

	if (from_obj == to->obj)
		return 0;

	ret = i915_gem_object_pin(to->obj, CONTEXT_ALIGN, false, false);
	if (ret)
		return ret;

	/* Clear this page out of any CPU caches for coherent swap-in/out. Note
	 * that thanks to write = false in this call and us not setting any gpu
	 * write domains when putting a context object onto the active list
	 * (when switching away from it), this won't block.
	 * XXX: We need a real interface to do this instead of trickery. */
	ret = i915_gem_object_set_to_gtt_domain(to->obj, false);
	if (ret) {
		i915_gem_object_unpin(to->obj);
		return ret;
	}

	if (!to->obj->has_global_gtt_mapping)
		i915_gem_gtt_bind_object(to->obj, to->obj->cache_level);

	if (!to->is_initialized || is_default_context(to))
		hw_flags |= MI_RESTORE_INHIBIT;
	else if (WARN_ON_ONCE(from_obj == to->obj)) /* not yet expected */
		hw_flags |= MI_FORCE_RESTORE;

	ret = mi_set_context(ring, to, hw_flags);
	if (ret) {
		i915_gem_object_unpin(to->obj);
		return ret;
	}

	/* The backing object for the context is done after switching to the
	 * *next* context. Therefore we cannot retire the previous context until
	 * the next context has already started running. In fact, the below code
	 * is a bit suboptimal because the retiring can occur simply after the
	 * MI_SET_CONTEXT instead of when the next seqno has completed.
	 */
	if (from_obj != NULL) {
		from_obj->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
		i915_gem_object_move_to_active(from_obj, ring);
		/* As long as MI_SET_CONTEXT is serializing, i.e. it flushes the
		 * whole damn pipeline, we don't need to explicitly mark the
		 * object dirty. The only exception is that the context must be
		 * correct in case the object gets swapped out. Ideally we'd be
		 * able to defer doing this until we know the object would be
		 * swapped, but there is no way to do that yet.
		 */
		from_obj->dirty = 1;
		BUG_ON(from_obj->ring != ring);
		i915_gem_object_unpin(from_obj);

		drm_gem_object_unreference(&from_obj->base);
	}

	drm_gem_object_reference(&to->obj->base);
	ring->last_context_obj = to->obj;
	to->is_initialized = true;

	return 0;
}

/**
 * i915_switch_context() - perform a GPU context switch.
 * @ring: ring for which we'll execute the context switch
 * @file: drm_file associated with the context, may be NULL
 * @to_id: context id number
 *
 * The context life cycle is simple. The context refcount is incremented and
 * decremented by 1 on create and destroy. If the context is in use by the GPU,
 * it will have a refcount > 1. This allows us to destroy the context abstract
 * object while letting the normal object tracking destroy the backing BO.
 */
int i915_switch_context(struct intel_ring_buffer *ring,
			struct drm_file *file,
			int to_id)
{
	struct drm_i915_private *dev_priv = ring->dev->dev_private;
	struct i915_hw_context *to;

	if (dev_priv->hw_contexts_disabled)
		return 0;

	if (ring != &dev_priv->ring[RCS])
		return 0;

	if (to_id == DEFAULT_CONTEXT_ID) {
		to = ring->default_context;
	} else {
		if (file == NULL)
			return -EINVAL;

		to = i915_gem_context_get(file->driver_priv, to_id);
		if (to == NULL)
			return -ENOENT;
	}

	return do_switch(to);
}

int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
				  struct drm_file *file)
{
	struct drm_i915_private *dev_priv = dev->dev_private;
	struct drm_i915_gem_context_create *args = data;
	struct drm_i915_file_private *file_priv = file->driver_priv;
	struct i915_hw_context *ctx;
	int ret;

	if (!(dev->driver->driver_features & DRIVER_GEM))
		return -ENODEV;

	if (dev_priv->hw_contexts_disabled)
		return -ENODEV;

	ret = i915_mutex_lock_interruptible(dev);
	if (ret)
		return ret;

	ctx = create_hw_context(dev, file_priv);
	mutex_unlock(&dev->struct_mutex);
	if (IS_ERR(ctx))
		return PTR_ERR(ctx);

	args->ctx_id = ctx->id;
	DRM_DEBUG_DRIVER("HW context %d created\n", args->ctx_id);

	return 0;
}

int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
				   struct drm_file *file)
{
	struct drm_i915_gem_context_destroy *args = data;
	struct drm_i915_file_private *file_priv = file->driver_priv;
	struct i915_hw_context *ctx;
	int ret;

	if (!(dev->driver->driver_features & DRIVER_GEM))
		return -ENODEV;

	ret = i915_mutex_lock_interruptible(dev);
	if (ret)
		return ret;

	ctx = i915_gem_context_get(file_priv, args->ctx_id);
	if (!ctx) {
		mutex_unlock(&dev->struct_mutex);
		return -ENOENT;
	}

	do_destroy(ctx);

	mutex_unlock(&dev->struct_mutex);

	DRM_DEBUG_DRIVER("HW context %d destroyed\n", args->ctx_id);
	return 0;
}