/*
 * Copyright © 2011-2012 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *    Ben Widawsky <ben@bwidawsk.net>
 *
 */

/*
 * This file implements HW context support. On gen5+ a HW context consists of an
 * opaque GPU object which is referenced at times of context saves and restores.
 * With RC6 enabled, the context is also referenced as the GPU enters and exits
 * RC6 (the GPU has its own internal power context, except on gen5). Though
 * something like a context does exist for the media ring, the code only
 * supports contexts for the render ring.
 *
 * In software, there is a distinction between contexts created by the user,
 * and the default HW context. The default HW context is used by GPU clients
 * that do not request setup of their own hardware context. The default
 * context's state is never restored to help prevent programming errors. This
 * would happen if a client ran and piggy-backed off another client's GPU state.
 * The default context only exists to give the GPU some offset to load as the
 * current to invoke a save of the context we actually care about.
 * In fact, the code could likely be constructed, albeit in a more complicated
 * fashion, to never use the default context, though that limits the driver's
 * ability to swap out, and/or destroy other contexts.
 *
 * All other contexts are created as a request by the GPU client. These contexts
 * store GPU state, and thus allow GPU clients to not re-emit state (and
 * potentially query certain state) at any time. The kernel driver makes
 * certain that the appropriate commands are inserted.
 *
 * The context life cycle is semi-complicated in that context BOs may live
 * longer than the context itself because of the way the hardware and object
 * tracking work. Below is a very crude representation of the state machine
 * describing the context life.
 *                                         refcount     pincount     active
 * S0: initial state                          0            0           0
 * S1: context created                        1            0           0
 * S2: context is currently running           2            1           X
 * S3: GPU referenced, but not current        2            0           1
 * S4: context is current, but destroyed      1            1           0
 * S5: like S3, but destroyed                 1            0           1
 *
 * The most common (but not all) transitions:
 * S0->S1: client creates a context
 * S1->S2: client submits execbuf with context
 * S2->S3: another client submits an execbuf with a different context
 * S3->S1: context object was retired
 * S3->S2: client submits another execbuf
 * S2->S4: context destroy called with current context
 * S3->S5->S0: destroy path
 * S4->S5->S0: destroy path on current context
 *
 * There are two confusing terms used above:
 *  The "current context" means the context which is currently running on the
 *  GPU. The GPU has loaded its state already and has stored away the gtt
 *  offset of the BO. The GPU is not actively referencing the data at this
 *  offset, but it will on the next context switch. The only way to avoid this
 *  is to do a GPU reset.
 *
 *  An "active context" is one which was previously the "current context" and is
 *  on the active list waiting for the next context switch to occur. Until this
 *  happens, the object must remain at the same gtt offset. It is therefore
 *  possible to destroy a context, but it is still active.
 *
 */

#include <drm/drmP.h>
#include <drm/i915_drm.h>
#include "i915_drv.h"
#include "i915_trace.h"

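/*
 * For reference, userspace exercises the lifecycle described above through the
 * context ioctls declared in i915_drm.h. A minimal illustrative sketch (not
 * compiled here; error handling omitted, and "fd" is assumed to be an open DRM
 * file descriptor):
 *
 *	struct drm_i915_gem_context_create create = {};
 *	struct drm_i915_gem_context_destroy destroy = {};
 *
 *	ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE, &create);
 *	// execbufs name the context by putting create.ctx_id into
 *	// drm_i915_gem_execbuffer2.rsvd1, e.g. via i915_execbuffer2_set_context_id()
 *	destroy.ctx_id = create.ctx_id;
 *	ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_DESTROY, &destroy);
 */
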
/* This is a HW constraint. The value below is the largest known requirement
 * I've seen in a spec to date, and that was a workaround for a non-shipping
 * part. It should be safe to decrease this, but it's more future proof as is.
 */
#define GEN6_CONTEXT_ALIGN (64<<10)
#define GEN7_CONTEXT_ALIGN 4096

static size_t get_context_alignment(struct drm_device *dev)
{
        if (IS_GEN6(dev))
                return GEN6_CONTEXT_ALIGN;

        return GEN7_CONTEXT_ALIGN;
}

static int get_context_size(struct drm_device *dev)
{
        struct drm_i915_private *dev_priv = dev->dev_private;
        int ret;
        u32 reg;

        switch (INTEL_INFO(dev)->gen) {
        case 6:
                reg = I915_READ(CXT_SIZE);
                ret = GEN6_CXT_TOTAL_SIZE(reg) * 64;
                break;
        case 7:
                reg = I915_READ(GEN7_CXT_SIZE);
                if (IS_HASWELL(dev))
                        ret = HSW_CXT_TOTAL_SIZE;
                else
                        ret = GEN7_CXT_TOTAL_SIZE(reg) * 64;
                break;
        case 8:
                ret = GEN8_CXT_TOTAL_SIZE;
                break;
        default:
                BUG();
        }

        return ret;
}

void i915_gem_context_free(struct kref *ctx_ref)
{
        struct intel_context *ctx = container_of(ctx_ref, typeof(*ctx), ref);

        trace_i915_context_free(ctx);

        if (i915.enable_execlists)
                intel_lr_context_free(ctx);

        i915_ppgtt_put(ctx->ppgtt);

        if (ctx->legacy_hw_ctx.rcs_state)
                drm_gem_object_unreference(&ctx->legacy_hw_ctx.rcs_state->base);
        list_del(&ctx->link);
        kfree(ctx);
}

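/* Allocate the GEM object that backs a legacy HW context image. On gen7+
 * (but not VLV, see below) the object is moved to the L3+LLC cache level so
 * the saved context image stays coherent and cached. */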
struct drm_i915_gem_object *
i915_gem_alloc_context_obj(struct drm_device *dev, size_t size)
{
        struct drm_i915_gem_object *obj;
        int ret;

        obj = i915_gem_alloc_object(dev, size);
        if (obj == NULL)
                return ERR_PTR(-ENOMEM);

        /*
         * Try to make the context utilize L3 as well as LLC.
         *
         * On VLV we don't have L3 controls in the PTEs so we
         * shouldn't touch the cache level, especially as that
         * would make the object snooped which might have a
         * negative performance impact.
         */
        if (INTEL_INFO(dev)->gen >= 7 && !IS_VALLEYVIEW(dev)) {
                ret = i915_gem_object_set_cache_level(obj, I915_CACHE_L3_LLC);
                /* Failure shouldn't ever happen this early */
                if (WARN_ON(ret)) {
                        drm_gem_object_unreference(&obj->base);
                        return ERR_PTR(ret);
                }
        }

        return obj;
}

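/* Common allocation path for both user-created and default contexts: set up
 * the refcount, link the context into dev_priv->context_list, allocate the
 * legacy backing object when legacy HW contexts are in use, and hand out a
 * handle from the file's context IDR (the default context has no file_priv
 * and always gets DEFAULT_CONTEXT_HANDLE). */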
static struct intel_context *
__create_hw_context(struct drm_device *dev,
                    struct drm_i915_file_private *file_priv)
{
        struct drm_i915_private *dev_priv = dev->dev_private;
        struct intel_context *ctx;
        int ret;

        ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
        if (ctx == NULL)
                return ERR_PTR(-ENOMEM);

        kref_init(&ctx->ref);
        list_add_tail(&ctx->link, &dev_priv->context_list);
        ctx->i915 = dev_priv;

        if (dev_priv->hw_context_size) {
                struct drm_i915_gem_object *obj =
                                i915_gem_alloc_context_obj(dev, dev_priv->hw_context_size);
                if (IS_ERR(obj)) {
                        ret = PTR_ERR(obj);
                        goto err_out;
                }
                ctx->legacy_hw_ctx.rcs_state = obj;
        }

        /* Default context will never have a file_priv */
        if (file_priv != NULL) {
                ret = idr_alloc(&file_priv->context_idr, ctx,
                                DEFAULT_CONTEXT_HANDLE, 0, GFP_KERNEL);
                if (ret < 0)
                        goto err_out;
        } else
                ret = DEFAULT_CONTEXT_HANDLE;

        ctx->file_priv = file_priv;
        ctx->user_handle = ret;
        /* NB: Mark all slices as needing a remap so that when the context first
         * loads it will restore whatever remap state already exists. If there
         * is no remap info, it will be a NOP. */
        ctx->remap_slice = (1 << NUM_L3_SLICES(dev)) - 1;

        ctx->hang_stats.ban_period_seconds = DRM_I915_CTX_BAN_PERIOD;

        return ctx;

err_out:
        i915_gem_context_unreference(ctx);
        return ERR_PTR(ret);
}

/**
 * The default context needs to exist per ring that uses contexts. It stores the
 * context state of the GPU for applications that don't utilize HW contexts, as
 * well as an idle case.
 */
static struct intel_context *
i915_gem_create_context(struct drm_device *dev,
                        struct drm_i915_file_private *file_priv)
{
        const bool is_global_default_ctx = file_priv == NULL;
        struct intel_context *ctx;
        int ret = 0;

        BUG_ON(!mutex_is_locked(&dev->struct_mutex));

        ctx = __create_hw_context(dev, file_priv);
        if (IS_ERR(ctx))
                return ctx;

        if (is_global_default_ctx && ctx->legacy_hw_ctx.rcs_state) {
                /* We may need to do things with the shrinker which
                 * require us to immediately switch back to the default
                 * context. This can cause a problem as pinning the
                 * default context also requires GTT space which may not
                 * be available. To avoid this we always pin the default
                 * context.
                 */
                ret = i915_gem_obj_ggtt_pin(ctx->legacy_hw_ctx.rcs_state,
                                            get_context_alignment(dev), 0);
                if (ret) {
                        DRM_DEBUG_DRIVER("Couldn't pin %d\n", ret);
                        goto err_destroy;
                }
        }

        if (USES_FULL_PPGTT(dev)) {
                struct i915_hw_ppgtt *ppgtt = i915_ppgtt_create(dev, file_priv);

                if (IS_ERR_OR_NULL(ppgtt)) {
                        DRM_DEBUG_DRIVER("PPGTT setup failed (%ld)\n",
                                         PTR_ERR(ppgtt));
                        ret = PTR_ERR(ppgtt);
                        goto err_unpin;
                }

                ctx->ppgtt = ppgtt;
        }

        trace_i915_context_create(ctx);

        return ctx;

err_unpin:
        if (is_global_default_ctx && ctx->legacy_hw_ctx.rcs_state)
                i915_gem_object_ggtt_unpin(ctx->legacy_hw_ctx.rcs_state);
err_destroy:
        i915_gem_context_unreference(ctx);
        return ERR_PTR(ret);
}

void i915_gem_context_reset(struct drm_device *dev)
{
        struct drm_i915_private *dev_priv = dev->dev_private;
        int i;

        if (i915.enable_execlists) {
                struct intel_context *ctx;

                list_for_each_entry(ctx, &dev_priv->context_list, link) {
                        intel_lr_context_reset(dev, ctx);
                }

                return;
        }

        for (i = 0; i < I915_NUM_RINGS; i++) {
                struct intel_engine_cs *ring = &dev_priv->ring[i];
                struct intel_context *lctx = ring->last_context;

                if (lctx) {
                        if (lctx->legacy_hw_ctx.rcs_state && i == RCS)
                                i915_gem_object_ggtt_unpin(lctx->legacy_hw_ctx.rcs_state);

                        i915_gem_context_unreference(lctx);
                        ring->last_context = NULL;
                }
        }
}

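/* One-time init at driver load: determine the legacy context image size (zero
 * when execlists allocate their own backing objects), create the global
 * default context and point every ring's default_context at it. */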
int i915_gem_context_init(struct drm_device *dev)
{
        struct drm_i915_private *dev_priv = dev->dev_private;
        struct intel_context *ctx;
        int i;

        /* Init should only be called once per module load. Eventually the
         * restriction on the context_disabled check can be loosened. */
        if (WARN_ON(dev_priv->ring[RCS].default_context))
                return 0;

        if (i915.enable_execlists) {
                /* NB: intentionally left blank. We will allocate our own
                 * backing objects as we need them, thank you very much */
                dev_priv->hw_context_size = 0;
        } else if (HAS_HW_CONTEXTS(dev)) {
                dev_priv->hw_context_size = round_up(get_context_size(dev), 4096);
                if (dev_priv->hw_context_size > (1<<20)) {
                        DRM_DEBUG_DRIVER("Disabling HW Contexts; invalid size %d\n",
                                         dev_priv->hw_context_size);
                        dev_priv->hw_context_size = 0;
                }
        }

        ctx = i915_gem_create_context(dev, NULL);
        if (IS_ERR(ctx)) {
                DRM_ERROR("Failed to create default global context (error %ld)\n",
                          PTR_ERR(ctx));
                return PTR_ERR(ctx);
        }

        for (i = 0; i < I915_NUM_RINGS; i++) {
                struct intel_engine_cs *ring = &dev_priv->ring[i];

                /* NB: RCS will hold a ref for all rings */
                ring->default_context = ctx;
        }

        DRM_DEBUG_DRIVER("%s context support initialized\n",
                        i915.enable_execlists ? "LR" :
                        dev_priv->hw_context_size ? "HW" : "fake");
        return 0;
}

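/* Driver unload counterpart of i915_gem_context_init(): reset the GPU so it
 * stops referencing the default context image, drop the pin plus the extra
 * reference taken by do_switch(), and clear each ring's context pointers. */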
void i915_gem_context_fini(struct drm_device *dev)
{
        struct drm_i915_private *dev_priv = dev->dev_private;
        struct intel_context *dctx = dev_priv->ring[RCS].default_context;
        int i;

        if (dctx->legacy_hw_ctx.rcs_state) {
                /* The only known way to stop the gpu from accessing the hw context is
                 * to reset it. Do this as the very last operation to avoid confusing
                 * other code, leading to spurious errors. */
                intel_gpu_reset(dev);

                /* When default context is created and switched to, base object refcount
                 * will be 2 (+1 from object creation and +1 from do_switch()).
                 * i915_gem_context_fini() will be called after gpu_idle() has switched
                 * to default context. So we need to unreference the base object once
                 * to offset the do_switch part, so that i915_gem_context_unreference()
                 * can then free the base object correctly. */
                WARN_ON(!dev_priv->ring[RCS].last_context);
                if (dev_priv->ring[RCS].last_context == dctx) {
                        /* Fake switch to NULL context */
                        WARN_ON(dctx->legacy_hw_ctx.rcs_state->active);
                        i915_gem_object_ggtt_unpin(dctx->legacy_hw_ctx.rcs_state);
                        i915_gem_context_unreference(dctx);
                        dev_priv->ring[RCS].last_context = NULL;
                }

                i915_gem_object_ggtt_unpin(dctx->legacy_hw_ctx.rcs_state);
        }

        for (i = 0; i < I915_NUM_RINGS; i++) {
                struct intel_engine_cs *ring = &dev_priv->ring[i];

                if (ring->last_context)
                        i915_gem_context_unreference(ring->last_context);

                ring->default_context = NULL;
                ring->last_context = NULL;
        }

        i915_gem_context_unreference(dctx);
}

int i915_gem_context_enable(struct drm_i915_private *dev_priv)
{
        struct intel_engine_cs *ring;
        int ret, i;

        BUG_ON(!dev_priv->ring[RCS].default_context);

        if (i915.enable_execlists) {
                for_each_ring(ring, dev_priv, i) {
                        if (ring->init_context) {
                                ret = ring->init_context(ring,
                                                ring->default_context);
                                if (ret) {
                                        DRM_ERROR("ring init context: %d\n",
                                                        ret);
                                        return ret;
                                }
                        }
                }

        } else
                for_each_ring(ring, dev_priv, i) {
                        ret = i915_switch_context(ring, ring->default_context);
                        if (ret)
                                return ret;
                }

        return 0;
}

static int context_idr_cleanup(int id, void *p, void *data)
{
        struct intel_context *ctx = p;

        i915_gem_context_unreference(ctx);
        return 0;
}

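/* Called when a client opens the DRM device: initialize the per-file context
 * IDR and create that file's default context. */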
int i915_gem_context_open(struct drm_device *dev, struct drm_file *file)
{
        struct drm_i915_file_private *file_priv = file->driver_priv;
        struct intel_context *ctx;

        idr_init(&file_priv->context_idr);

        mutex_lock(&dev->struct_mutex);
        ctx = i915_gem_create_context(dev, file_priv);
        mutex_unlock(&dev->struct_mutex);

        if (IS_ERR(ctx)) {
                idr_destroy(&file_priv->context_idr);
                return PTR_ERR(ctx);
        }

        return 0;
}

void i915_gem_context_close(struct drm_device *dev, struct drm_file *file)
{
        struct drm_i915_file_private *file_priv = file->driver_priv;

        idr_for_each(&file_priv->context_idr, context_idr_cleanup, NULL);
        idr_destroy(&file_priv->context_idr);
}

struct intel_context *
i915_gem_context_get(struct drm_i915_file_private *file_priv, u32 id)
{
        struct intel_context *ctx;

        ctx = (struct intel_context *)idr_find(&file_priv->context_idr, id);
        if (!ctx)
                return ERR_PTR(-ENOENT);

        return ctx;
}

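/* Emit the MI_SET_CONTEXT command sequence on the ring, wrapped in the
 * MI_ARB_ON_OFF and PSMI sleep-message workarounds that gen7+ requires around
 * the context switch. */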
static inline int
mi_set_context(struct intel_engine_cs *ring,
               struct intel_context *new_context,
               u32 hw_flags)
{
        u32 flags = hw_flags | MI_MM_SPACE_GTT;
        const int num_rings =
                /* Use an extended w/a on ivb+ if signalling from other rings */
                i915_semaphore_is_enabled(ring->dev) ?
                hweight32(INTEL_INFO(ring->dev)->ring_mask) - 1 :
                0;
        int len, i, ret;

        /* w/a: If Flush TLB Invalidation Mode is enabled, driver must do a TLB
         * invalidation prior to MI_SET_CONTEXT. On GEN6 we don't set the value
         * explicitly, so we rely on the value at ring init, stored in
         * itlb_before_ctx_switch.
         */
        if (IS_GEN6(ring->dev)) {
                ret = ring->flush(ring, I915_GEM_GPU_DOMAINS, 0);
                if (ret)
                        return ret;
        }

        /* These flags are for resource streamer on HSW+ */
        if (!IS_HASWELL(ring->dev) && INTEL_INFO(ring->dev)->gen < 8)
                flags |= (MI_SAVE_EXT_STATE_EN | MI_RESTORE_EXT_STATE_EN);

        len = 4;
        if (INTEL_INFO(ring->dev)->gen >= 7)
                len += 2 + (num_rings ? 4*num_rings + 2 : 0);

        ret = intel_ring_begin(ring, len);
        if (ret)
                return ret;

        /* WaProgramMiArbOnOffAroundMiSetContext:ivb,vlv,hsw,bdw,chv */
        if (INTEL_INFO(ring->dev)->gen >= 7) {
                intel_ring_emit(ring, MI_ARB_ON_OFF | MI_ARB_DISABLE);
                if (num_rings) {
                        struct intel_engine_cs *signaller;

                        intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
                        for_each_ring(signaller, to_i915(ring->dev), i) {
                                if (signaller == ring)
                                        continue;

                                intel_ring_emit(ring, RING_PSMI_CTL(signaller->mmio_base));
                                intel_ring_emit(ring, _MASKED_BIT_ENABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
                        }
                }
        }

        intel_ring_emit(ring, MI_NOOP);
        intel_ring_emit(ring, MI_SET_CONTEXT);
        intel_ring_emit(ring, i915_gem_obj_ggtt_offset(new_context->legacy_hw_ctx.rcs_state) |
                        flags);
        /*
         * w/a: MI_SET_CONTEXT must always be followed by MI_NOOP
         * WaMiSetContext_Hang:snb,ivb,vlv
         */
        intel_ring_emit(ring, MI_NOOP);

        if (INTEL_INFO(ring->dev)->gen >= 7) {
                if (num_rings) {
                        struct intel_engine_cs *signaller;

                        intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
                        for_each_ring(signaller, to_i915(ring->dev), i) {
                                if (signaller == ring)
                                        continue;

                                intel_ring_emit(ring, RING_PSMI_CTL(signaller->mmio_base));
                                intel_ring_emit(ring, _MASKED_BIT_DISABLE(GEN6_PSMI_SLEEP_MSG_DISABLE));
                        }
                }
                intel_ring_emit(ring, MI_ARB_ON_OFF | MI_ARB_ENABLE);
        }

        intel_ring_advance(ring);

        return ret;
}

static inline bool should_skip_switch(struct intel_engine_cs *ring,
                                      struct intel_context *from,
                                      struct intel_context *to)
{
        if (to->remap_slice)
                return false;

        if (to->ppgtt && from == to &&
            !(intel_ring_flag(ring) & to->ppgtt->pd_dirty_rings))
                return true;

        return false;
}

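/* True when the PPGTT page directory must be loaded via LRI before
 * MI_SET_CONTEXT: always for full PPGTT on pre-gen8, and on gen8 for the
 * non-render rings, which never emit MI_SET_CONTEXT at all. */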
static bool
needs_pd_load_pre(struct intel_engine_cs *ring, struct intel_context *to)
{
        struct drm_i915_private *dev_priv = ring->dev->dev_private;

        if (!to->ppgtt)
                return false;

        if (INTEL_INFO(ring->dev)->gen < 8)
                return true;

        if (ring != &dev_priv->ring[RCS])
                return true;

        return false;
}

static bool
needs_pd_load_post(struct intel_engine_cs *ring, struct intel_context *to,
                u32 hw_flags)
{
        struct drm_i915_private *dev_priv = ring->dev->dev_private;

        if (!to->ppgtt)
                return false;

        if (!IS_GEN8(ring->dev))
                return false;

        if (ring != &dev_priv->ring[RCS])
                return false;

        if (hw_flags & MI_RESTORE_INHIBIT)
                return true;

        return false;
}

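/* Perform a legacy (ring buffer) context switch to @to on @ring: pin the
 * target context image, emit any required PPGTT page directory load, emit
 * MI_SET_CONTEXT (render ring only), redo pending L3 remaps, and let the
 * previous context's image be retired through the normal active tracking. */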
static int do_switch(struct intel_engine_cs *ring,
                     struct intel_context *to)
{
        struct drm_i915_private *dev_priv = ring->dev->dev_private;
        struct intel_context *from = ring->last_context;
        u32 hw_flags = 0;
        bool uninitialized = false;
        int ret, i;

        if (from != NULL && ring == &dev_priv->ring[RCS]) {
                BUG_ON(from->legacy_hw_ctx.rcs_state == NULL);
                BUG_ON(!i915_gem_obj_is_pinned(from->legacy_hw_ctx.rcs_state));
        }

        if (should_skip_switch(ring, from, to))
                return 0;

        /* Trying to pin first makes error handling easier. */
        if (ring == &dev_priv->ring[RCS]) {
                ret = i915_gem_obj_ggtt_pin(to->legacy_hw_ctx.rcs_state,
                                            get_context_alignment(ring->dev), 0);
                if (ret)
                        return ret;
        }

        /*
         * Pin can switch back to the default context if we end up calling into
         * evict_everything - as a last ditch gtt defrag effort that also
         * switches to the default context. Hence we need to reload from here.
         */
        from = ring->last_context;

        if (needs_pd_load_pre(ring, to)) {
                /* Older GENs and non render rings still want the load first,
                 * "PP_DCLV followed by PP_DIR_BASE register through Load
                 * Register Immediate commands in Ring Buffer before submitting
                 * a context."*/
                trace_switch_mm(ring, to);
                ret = to->ppgtt->switch_mm(to->ppgtt, ring);
                if (ret)
                        goto unpin_out;

                /* Doing a PD load always reloads the page dirs */
                to->ppgtt->pd_dirty_rings &= ~intel_ring_flag(ring);
        }

        if (ring != &dev_priv->ring[RCS]) {
                if (from)
                        i915_gem_context_unreference(from);
                goto done;
        }

        /*
         * Clear this page out of any CPU caches for coherent swap-in/out. Note
         * that thanks to write = false in this call and us not setting any gpu
         * write domains when putting a context object onto the active list
         * (when switching away from it), this won't block.
         *
         * XXX: We need a real interface to do this instead of trickery.
         */
        ret = i915_gem_object_set_to_gtt_domain(to->legacy_hw_ctx.rcs_state, false);
        if (ret)
                goto unpin_out;

        if (!to->legacy_hw_ctx.initialized) {
                hw_flags |= MI_RESTORE_INHIBIT;
                /* NB: If we inhibit the restore, the context is not allowed to
                 * die because future work may end up depending on valid address
                 * space. This means we must enforce that a page table load
                 * occur when this occurs. */
        } else if (to->ppgtt &&
                   (intel_ring_flag(ring) & to->ppgtt->pd_dirty_rings)) {
                hw_flags |= MI_FORCE_RESTORE;
                to->ppgtt->pd_dirty_rings &= ~intel_ring_flag(ring);
        }

        /* We should never emit switch_mm more than once */
        WARN_ON(needs_pd_load_pre(ring, to) &&
                needs_pd_load_post(ring, to, hw_flags));

        ret = mi_set_context(ring, to, hw_flags);
        if (ret)
                goto unpin_out;

        /* GEN8 does *not* require an explicit reload if the PDPs have been
         * setup, and we do not wish to move them.
         */
        if (needs_pd_load_post(ring, to, hw_flags)) {
                trace_switch_mm(ring, to);
                ret = to->ppgtt->switch_mm(to->ppgtt, ring);
                /* The hardware context switch is emitted, but we haven't
                 * actually changed the state - so it's probably safe to bail
                 * here. Still, let the user know something dangerous has
                 * happened.
                 */
                if (ret) {
                        DRM_ERROR("Failed to change address space on context switch\n");
                        goto unpin_out;
                }
        }

        for (i = 0; i < MAX_L3_SLICES; i++) {
                if (!(to->remap_slice & (1<<i)))
                        continue;

                ret = i915_gem_l3_remap(ring, i);
                /* If it failed, try again next round */
                if (ret)
                        DRM_DEBUG_DRIVER("L3 remapping failed\n");
                else
                        to->remap_slice &= ~(1<<i);
        }

        /* The backing object for the context is done after switching to the
         * *next* context. Therefore we cannot retire the previous context until
         * the next context has already started running. In fact, the below code
         * is a bit suboptimal because the retiring can occur simply after the
         * MI_SET_CONTEXT instead of when the next seqno has completed.
         */
        if (from != NULL) {
                from->legacy_hw_ctx.rcs_state->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
                i915_vma_move_to_active(i915_gem_obj_to_ggtt(from->legacy_hw_ctx.rcs_state), ring);
                /* As long as MI_SET_CONTEXT is serializing, ie. it flushes the
                 * whole damn pipeline, we don't need to explicitly mark the
                 * object dirty. The only exception is that the context must be
                 * correct in case the object gets swapped out. Ideally we'd be
                 * able to defer doing this until we know the object would be
                 * swapped, but there is no way to do that yet.
                 */
                from->legacy_hw_ctx.rcs_state->dirty = 1;

                /* obj is kept alive until the next request by its active ref */
                i915_gem_object_ggtt_unpin(from->legacy_hw_ctx.rcs_state);
                i915_gem_context_unreference(from);
        }

        uninitialized = !to->legacy_hw_ctx.initialized;
        to->legacy_hw_ctx.initialized = true;

done:
        i915_gem_context_reference(to);
        ring->last_context = to;

        if (uninitialized) {
                if (ring->init_context) {
                        ret = ring->init_context(ring, to);
                        if (ret)
                                DRM_ERROR("ring init context: %d\n", ret);
                }
        }

        return 0;

unpin_out:
        if (ring->id == RCS)
                i915_gem_object_ggtt_unpin(to->legacy_hw_ctx.rcs_state);
        return ret;
}

/**
 * i915_switch_context() - perform a GPU context switch.
 * @ring: ring for which we'll execute the context switch
 * @to: the context to switch to
 *
 * The context life cycle is simple. The context refcount is incremented and
 * decremented by 1 on create and destroy. If the context is in use by the GPU,
 * it will have a refcount > 1. This allows us to destroy the context abstract
 * object while letting the normal object tracking destroy the backing BO.
 *
 * This function should not be used in execlists mode. Instead the context is
 * switched by writing to the ELSP and requests keep a reference to their
 * context.
 */
int i915_switch_context(struct intel_engine_cs *ring,
                        struct intel_context *to)
{
        struct drm_i915_private *dev_priv = ring->dev->dev_private;

        WARN_ON(i915.enable_execlists);
        WARN_ON(!mutex_is_locked(&dev_priv->dev->struct_mutex));

        if (to->legacy_hw_ctx.rcs_state == NULL) { /* We have the fake context */
                if (to != ring->last_context) {
                        i915_gem_context_reference(to);
                        if (ring->last_context)
                                i915_gem_context_unreference(ring->last_context);
                        ring->last_context = to;
                }
                return 0;
        }

        return do_switch(ring, to);
}

static bool contexts_enabled(struct drm_device *dev)
{
        return i915.enable_execlists || to_i915(dev)->hw_context_size;
}

int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
                                  struct drm_file *file)
{
        struct drm_i915_gem_context_create *args = data;
        struct drm_i915_file_private *file_priv = file->driver_priv;
        struct intel_context *ctx;
        int ret;

        if (!contexts_enabled(dev))
                return -ENODEV;

        ret = i915_mutex_lock_interruptible(dev);
        if (ret)
                return ret;

        ctx = i915_gem_create_context(dev, file_priv);
        mutex_unlock(&dev->struct_mutex);
        if (IS_ERR(ctx))
                return PTR_ERR(ctx);

        args->ctx_id = ctx->user_handle;
        DRM_DEBUG_DRIVER("HW context %d created\n", args->ctx_id);

        return 0;
}

int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
                                   struct drm_file *file)
{
        struct drm_i915_gem_context_destroy *args = data;
        struct drm_i915_file_private *file_priv = file->driver_priv;
        struct intel_context *ctx;
        int ret;

        if (args->ctx_id == DEFAULT_CONTEXT_HANDLE)
                return -ENOENT;

        ret = i915_mutex_lock_interruptible(dev);
        if (ret)
                return ret;

        ctx = i915_gem_context_get(file_priv, args->ctx_id);
        if (IS_ERR(ctx)) {
                mutex_unlock(&dev->struct_mutex);
                return PTR_ERR(ctx);
        }

        idr_remove(&ctx->file_priv->context_idr, ctx->user_handle);
        i915_gem_context_unreference(ctx);
        mutex_unlock(&dev->struct_mutex);

        DRM_DEBUG_DRIVER("HW context %d destroyed\n", args->ctx_id);
        return 0;
}

int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
                                    struct drm_file *file)
{
        struct drm_i915_file_private *file_priv = file->driver_priv;
        struct drm_i915_gem_context_param *args = data;
        struct intel_context *ctx;
        int ret;

        ret = i915_mutex_lock_interruptible(dev);
        if (ret)
                return ret;

        ctx = i915_gem_context_get(file_priv, args->ctx_id);
        if (IS_ERR(ctx)) {
                mutex_unlock(&dev->struct_mutex);
                return PTR_ERR(ctx);
        }

        args->size = 0;
        switch (args->param) {
        case I915_CONTEXT_PARAM_BAN_PERIOD:
                args->value = ctx->hang_stats.ban_period_seconds;
                break;
        default:
                ret = -EINVAL;
                break;
        }
        mutex_unlock(&dev->struct_mutex);

        return ret;
}

int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
                                    struct drm_file *file)
{
        struct drm_i915_file_private *file_priv = file->driver_priv;
        struct drm_i915_gem_context_param *args = data;
        struct intel_context *ctx;
        int ret;

        ret = i915_mutex_lock_interruptible(dev);
        if (ret)
                return ret;

        ctx = i915_gem_context_get(file_priv, args->ctx_id);
        if (IS_ERR(ctx)) {
                mutex_unlock(&dev->struct_mutex);
                return PTR_ERR(ctx);
        }

        switch (args->param) {
        case I915_CONTEXT_PARAM_BAN_PERIOD:
                if (args->size)
                        ret = -EINVAL;
                else if (args->value < ctx->hang_stats.ban_period_seconds &&
                         !capable(CAP_SYS_ADMIN))
                        ret = -EPERM;
                else
                        ctx->hang_stats.ban_period_seconds = args->value;
                break;
        default:
                ret = -EINVAL;
                break;
        }
        mutex_unlock(&dev->struct_mutex);

        return ret;
}