GitLab Experiment Platform
=================
<img alt="experiment" src="/uploads/60990b2dbf4c0406bbf8b7f998de2dea/experiment.png" align="right" width="40%">
**A comprehensive experimentation platform for building data-driven organizations**
GitLab Experiment is an enterprise-grade experimentation framework that enables teams to validate hypotheses, optimize
user experiences, and make evidence-based product decisions at scale.
Built on years of production experience at GitLab, this platform provides the foundation for a mature experimentation
culture across your entire organization.
At GitLab, we run experiments as A/B/n tests and review the data they generate.
From that data, we determine the best performing code path and promote it as the new default, or revert back to the
original code path.
You can read our [Experiment Guide](https://docs.gitlab.com/ee/development/experiment_guide/) to learn how we use this
gem internally at GitLab.
[[_TOC_]]
## Why GitLab Experiment?
### Built for Scale and Reliability
- **Production-tested** at GitLab scale with millions of users
- **Type-safe and testable** with comprehensive RSpec support
- **Framework agnostic** - works with Rails, Sinatra, or standalone Ruby applications
- **Redis-backed caching** for consistent user experiences across sessions
- **GDPR-compliant** with anonymous tracking and built-in DNT (Do Not Track) support
### Designed for Teams
- **Developer-friendly DSL** that reads like natural language
- **Organized experiment classes** that live alongside your application code
- **Built-in generators** for rapid experiment creation
- **Comprehensive testing support** with custom RSpec matchers
- **Rails integration** with automatic middleware mounting and view helpers
### Enterprise-Ready Features
- **Flexible rollout strategies** (percent-based, random, round-robin, or custom)
- **Advanced segmentation** to target specific user populations
- **Multi-variant testing** (A/B/n) with unlimited experimental paths
- **Progressive rollouts** with the `only_assigned` feature
- **Context migrations** for evolving experiments without losing data
- **Integration-ready** with existing feature flag systems (Flipper, Unleash, etc.)
<br clear="all">
## Use Cases Across Your Organization
### Product Teams: Optimize User Experiences
- **Onboarding flows**: Test different signup sequences to maximize activation
- **UI/UX changes**: Validate design decisions with real user behavior data
- **Feature rollouts**: Gradually release features to measure impact before full deployment
- **Pricing experiments**: Test different pricing strategies and messaging
### Growth Teams: Drive Conversion
- **Call-to-action optimization**: Test button colors, copy, and placement
- **Landing page variations**: Experiment with different value propositions
- **Email campaigns**: A/B test subject lines and content
- **Trial conversion**: Optimize paths from trial to paid subscriptions
### Engineering Teams: Safe Deployments
- **Performance optimizations**: Compare algorithm implementations under real load
- **Architecture changes**: Validate new code paths before full migration
- **API versions**: Run multiple API implementations side-by-side
- **Infrastructure experiments**: Test different caching or database strategies
### Data Science Teams: Recommendation Systems
- **Algorithm tuning**: Compare ML model variations in production
- **Personalization**: Test different recommendation strategies
- **Search ranking**: Optimize search results based on user engagement
- **Content discovery**: Experiment with different content surfaces
## Platform Capabilities
### Core Experimentation Features
**Multi-variant Testing (A/B/n)**
Run experiments with any number of variants, not just A/B tests. Perfect for testing multiple approaches simultaneously.
**Smart Segmentation**
Route specific user populations to predetermined variants based on business rules, ensuring consistent experiences for
targeted groups.
**Progressive Rollouts**
Use the `only_assigned` feature to show experimental features only to users already in the experiment, enabling
controlled expansion.
**Context Flexibility**
Experiments can be sticky to users, projects, organizations, or any combination - enabling complex scenarios beyond
user-centric testing.
**Anonymous Tracking**
Built-in privacy protection with anonymous context keys, automatic cookie migration, and GDPR compliance.
**Automatic Assignment Tracking**
Every experiment automatically tracks an `:assignment` event when it runs - zero configuration required. Combined with
the anonymous context key, this gives your data team a complete picture of variant distribution and funnel entry without
any additional instrumentation.
**Client-Side Integration**
Seamlessly extend experiments to the frontend with JavaScript integration, enabling full-stack experimentation.
**Inline and Class-Based APIs**
Define experiments inline with blocks for quick iterations, or use dedicated experiment classes for complex logic - or
combine both. Class-based experiments define default behaviors that can be overridden inline at any call site, giving
teams the flexibility to start simple and evolve without rewriting:
```ruby
experiment(:pill_color, actor: current_user) do |e|
  e.control { '<strong>control</strong>' }
end
```
**Context as a Design Framework**
Context is the most important design decision in any experiment. It determines stickiness, cache behavior, and how
events are correlated. Choose per-user context for personalization experiments, per-project for infrastructure tests,
per-group for organizational rollouts, or combine dimensions for precision targeting. This flexibility enables
experimentation strategies that go far beyond simple user-centric A/B tests.
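As a toy illustration of stickiness (a simplified model, not the gem's actual key derivation), you can picture the context as hashing into a stable key: the same context always lands on the same key, while adding a dimension such as a project produces a distinct one.

```ruby
require 'digest'

# Toy model of context stickiness (not the gem's actual key derivation):
# derive a stable key from the experiment name plus the sorted context hash.
# Identical contexts always map to the same key, regardless of key order;
# adding a dimension (e.g. a project) produces a distinct key.
def context_key(experiment_name, **context)
  Digest::SHA256.hexdigest([experiment_name, context.sort].inspect)
end

user_key   = context_key(:search_algorithm, actor: 'user-1')
same_key   = context_key(:search_algorithm, actor: 'user-1')
scoped_key = context_key(:search_algorithm, actor: 'user-1', project: 'p-9')

user_key == same_key   # => true  (sticky per user)
user_key == scoped_key # => false (user + project is a different experience)
```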
**Decoupled Assignment with Publish**
Surface experiment assignments to the client layer without executing server-side behavior using `publish`. This enables
frontend-only experiments, pre-assignment in `before_action` hooks, and scenarios where variant data needs to be
available across the stack without triggering server-side code paths:
```ruby
before_action -> { experiment(:pill_color, actor: current_user).publish }, only: [:show]
```
### Integration Ecosystem
**Feature Flag Integration**
Connect with existing feature flag systems like Flipper or Unleash through custom rollout strategies.
**Analytics Integration**
Flexible tracking callbacks integrate with any analytics platform - Snowplow, Amplitude, Mixpanel, or your data
warehouse.
**Monitoring and Observability**
Built-in logging and callbacks for integration with APM tools and monitoring systems.
**Email and Markdown**
Special middleware for tracking experiments in email links and static content.
### Terminology
When we discuss the platform, we use specific terms that are worth understanding:
- **experiment** - Any deviation of code paths we want to test
- **context** - Identifies a consistent experience (user, project, session, etc.)
- **control** - The default or "original" code path
- **candidate** - One experimental code path (used in A/B tests)
- **variant(s)** - Multiple experimental paths (used in A/B/n tests)
- **behaviors** - All possible code paths (control + all variants)
- **rollout strategy** - Logic determining if an experiment is enabled and how variants are assigned
- **segmentation** - Rules for routing specific contexts to predetermined variants
- **exclusion** - Rules for keeping contexts out of experiments entirely
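The terms above can be tied together in a short resolution sketch. This is a simplified standalone model for illustration only, not the gem's implementation, and names like `internal_user` and `beta_program` are hypothetical:

```ruby
require 'zlib'

# Toy model tying the terminology together (not the gem's implementation).
# Behaviors = control plus all variants; exclusion keeps a context out
# entirely; segmentation routes matching contexts to a fixed variant;
# otherwise the rollout strategy picks a behavior deterministically.
BEHAVIORS = {
  control:   -> { 'original code path' },
  variant_a: -> { 'experimental path A' },
  variant_b: -> { 'experimental path B' }
}.freeze

def resolve(context)
  return :excluded  if context[:internal_user] # exclusion: no assignment at all
  return :variant_a if context[:beta_program]  # segmentation: predetermined variant
  BEHAVIORS.keys[Zlib.crc32(context[:id].to_s) % BEHAVIORS.size] # rollout strategy
end

resolve(id: 'user-1', internal_user: true) # => :excluded
resolve(id: 'user-2', beta_program: true)  # => :variant_a
```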
<br clear="all">
## Quick Start: From Zero to Experiment in 5 Minutes
### Installation
Add the gem to your Gemfile and then `bundle install`.
```ruby
gem 'gitlab-experiment'
```
If you're using Rails, install the initializer which provides basic configuration, documentation, and the base
experiment class:
```shell
$ rails generate gitlab:experiment:install
```
### Your First Experiment
Let's create a real-world experiment to optimize a call-to-action button.
This example demonstrates the power of the platform while remaining practical.
#### Step 1: Generate the experiment
**Hypothesis**: A more prominent call-to-action button will increase conversion rates
```shell
$ rails generate gitlab:experiment signup_cta
```
This creates `app/experiments/signup_cta_experiment.rb` with helpful inline documentation.
#### Step 2: Define your experiment class
```ruby
class SignupCtaExperiment < ApplicationExperiment
  # Define the control (current experience)
  control { 'btn-default' }

  # Define the candidate (new experience to test)
  candidate { 'btn-primary btn-lg' }

  # Optional: Exclude certain users
  exclude :existing_customers

  # Optional: Track when the experiment runs
  after_run :log_experiment_assignment

  private

  def existing_customers
    context.actor&.subscribed?
  end

  def log_experiment_assignment
    Rails.logger.info("User assigned to #{assigned.name} variant")
  end
end
```
#### Step 3: Use the experiment in your view
```haml
-# The experiment is sticky to the current user
-# Anonymous users get a cookie-based assignment
%button{ class: experiment(:signup_cta, actor: current_user).run }
  Start Free Trial
```
#### Step 4: Track engagement
```ruby
# In your controller, track when users click the button
def create_trial
  experiment(:signup_cta, actor: current_user).track(:signup_completed)
  # ... rest of your trial creation logic
end
```
**That's it!** Your experiment is now running, collecting data, and providing consistent experiences to your users.
## Real-World Examples
### Example 1: Onboarding Flow Optimization
**Business Context**: Product team wants to increase new user activation by testing different onboarding sequences.
```ruby
class OnboardingFlowExperiment < ApplicationExperiment
  # Three different onboarding approaches
  control { :standard_tour }        # Current 5-step tour
  variant(:quick) { :quick_start }  # Streamlined 2-step flow
  variant(:video) { :video_guide }  # Video-based walkthrough

  # Only show to new users who haven't completed onboarding
  exclude :has_completed_onboarding

  # Segment enterprise trial users to the standard tour
  segment :enterprise_trial?, variant: :control

  private

  def has_completed_onboarding
    context.actor&.onboarding_completed_at.present?
  end

  def enterprise_trial?
    context.actor&.trial_type == 'enterprise'
  end
end

# In your onboarding controller
def show
  flow = experiment(:onboarding_flow, actor: current_user).run
  render_onboarding_flow(flow)
end

# Track completion
def complete
  experiment(:onboarding_flow, actor: current_user).track(:completed)
  # ... mark user as onboarded
end
```
### Example 2: Pricing Page Experiment
**Business Context**: Growth team wants to test whether showing annual savings increases annual plan selection.
```ruby
class PricingDisplayExperiment < ApplicationExperiment
  control { :monthly_default }
  candidate { :annual_default_with_savings }

  # Only run for unauthenticated visitors
  exclude :authenticated_user

  private

  def authenticated_user
    context.actor.present?
  end
end
```

```haml
-# In your pricing view
- pricing_variant = experiment(:pricing_display, actor: current_user).run
= render "pricing/#{pricing_variant}"
```

```ruby
# Track plan selections
def select_plan
  experiment(:pricing_display, actor: current_user).track(:plan_selected,
    value: params[:plan_type] == 'annual' ? 1 : 0
  )
end
```
### Example 3: Algorithm Performance Test
**Business Context**: Engineering team wants to compare a new search algorithm's performance before full rollout.
```ruby
class SearchAlgorithmExperiment < ApplicationExperiment
  control { SearchEngine::Legacy }
  candidate { SearchEngine::Neural }

  # Only run the candidate for 25% of searches
  default_rollout :percent, distribution: { control: 75, candidate: 25 }

  # Exclude searches from the API (higher SLA requirements)
  exclude :api_request

  # Track performance metrics
  after_run :record_search_timing

  private

  def api_request
    context.request&.path&.start_with?('/api/')
  end

  def record_search_timing
    # Custom metrics tracking
  end
end

# In your search service
def search(query)
  search_experiment = experiment(:search_algorithm, actor: current_user, project: current_project)

  results = search_experiment.run.search(query)
  search_experiment.track(:search_completed, value: results.count)

  results
end
```
### Example 4: Progressive Feature Rollout
**Business Context**: Launching a new AI-assisted code review feature, want to expand gradually to manage load and gather feedback.
```ruby
class AiCodeReviewExperiment < ApplicationExperiment
  control { false }   # Feature disabled
  candidate { true }  # Feature enabled

  # Start with a 5% rollout
  default_rollout :percent, distribution: { control: 95, candidate: 5 }

  # Segment beta program users to always get the feature
  segment :beta_user?, variant: :candidate

  # Exclude the free tier (computational cost consideration)
  exclude :free_tier_user

  private

  def beta_user?
    context.actor&.beta_features_enabled?
  end

  def free_tier_user
    context.actor&.subscription_tier == 'free'
  end
end
```

```haml
-# In your merge request view
- if experiment(:ai_code_review, actor: current_user, project: @project).run
  .ai-code-review-panel
    = render 'ai_suggestions'
```

```ruby
# Track usage
def apply_ai_suggestion
  experiment(:ai_code_review, actor: current_user, project: @project)
    .track(:suggestion_applied)
end
```
## Platform Integration Patterns
### Integration with Feature Flags (Flipper)
Many organizations already use feature flag systems. GitLab Experiment integrates seamlessly:
```ruby
module Gitlab::Experiment::Rollout
  class Flipper < Percent
    def enabled?
      ::Flipper.enabled?(experiment.name, experiment_actor)
    end

    def experiment_actor
      Struct.new(:flipper_id).new("Experiment;#{id}")
    end
  end
end

# Configure globally
Gitlab::Experiment.configure do |config|
  config.default_rollout = Gitlab::Experiment::Rollout::Flipper.new
end

# Now Flipper controls your experiments
Flipper.enable_percentage_of_actors(:signup_cta, 50)
```
### Integration with Analytics Platforms
Connect experiments to your analytics stack:
```ruby
Gitlab::Experiment.configure do |config|
  config.tracking_behavior = lambda do |event_name, **data|
    # Snowplow
    SnowplowTracker.track_struct_event(
      category: 'experiment',
      action: event_name,
      property: data[:experiment],
      context: [{ schema: 'experiment_context', data: data }]
    )

    # Amplitude (example)
    Amplitude.track(
      user_id: data[:key], # Anonymous experiment key
      event_type: "experiment_#{event_name}",
      event_properties: data
    )

    # Custom data warehouse
    DataWarehouse.log_experiment_event(event_name, data)
  end
end
```
### Multi-Application Consistency
Share experiment assignments across multiple applications:
```ruby
# Shared Redis cache
Gitlab::Experiment.configure do |config|
  config.cache = Gitlab::Experiment::Cache::RedisHashStore.new(
    Redis.new(url: ENV['REDIS_URL']),
    expires_in: 30.days
  )
end
# Now experiments stay consistent across your web app, API, and background jobs
```
## Advanced Features
### Multi-Variant (A/B/n) Testing
Test multiple variations simultaneously to find the optimal solution:
```ruby
class NotificationStyleExperiment < ApplicationExperiment
  # Test three different notification approaches
  control { :banner }         # Current: banner at top
  variant(:toast) { :toast }  # Toast notification
  variant(:modal) { :modal }  # Modal dialog

  # Distribute traffic evenly across all three
  default_rollout :percent,
    distribution: { control: 34, toast: 33, modal: 33 }

  # Exclude mobile users (different UI constraints)
  exclude :mobile_user

  # Segment power users to toast (less intrusive)
  segment :power_user?, variant: :toast

  private

  def mobile_user
    context.request&.user_agent&.match?(/Mobile/)
  end

  def power_user?
    context.actor&.actions_count.to_i > 1000
  end
end
```
### Exclusion Rules
Keep contexts out of experiments entirely based on business rules:
```ruby
class FeatureExperiment < ApplicationExperiment
  # Exclude existing customers (only test on prospects)
  exclude :existing_customer

  # Exclude during maintenance windows
  exclude -> { context.project&.under_maintenance? }

  # Exclude if feature is explicitly disabled
  exclude :feature_disabled

  private

  def existing_customer
    context.actor&.subscribed?
  end

  def feature_disabled
    !FeatureFlag.enabled?(:allow_experiment, context.actor)
  end
end
```
**Key behaviors:**
- Excluded users always receive the control experience
- No tracking events are recorded for excluded users
- Exclusion rules are evaluated in order; the first match wins
- Exclusions improve performance by exiting early
**Inline exclusion** is also supported:
```ruby
experiment(:feature, actor: current_user) do |e|
  e.exclude! unless can?(current_user, :manage, project)

  e.control { 'standard' }
  e.candidate { 'enhanced' }
end
```
Note: Tracking calls are ignored for all exclusions, but if your exclusion logic is expensive you may want to check it
yourself before doing extra work, by calling the `should_track?` or `excluded?` methods.

Note: When using exclusion rules, be aware that the control assignment is cached. This improves the performance of
future experiment runs, but it also means a context excluded once will keep receiving the cached control assignment
even if the exclusion rules later change.

Note: Exclusion rules aren't the best way to determine whether an experiment is enabled. Override the `enabled?` method
instead for a high-level check on whether an experiment should be running and tracking at all. Keep `enabled?` as
efficient as possible, because it's the first early opt-out path an experiment can take. You can see where it fits in
[How Experiments Work (Technical)](#how-experiments-work-technical).
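The pattern can be sketched in a standalone class (dates and the class name here are hypothetical; in practice you would override `enabled?` on your `ApplicationExperiment` subclass):

```ruby
require 'date'

# Standalone sketch of the `enabled?` early opt-out pattern; in the gem you
# would override this method on your ApplicationExperiment subclass. The
# check should be as cheap as possible -- no database or network calls.
class DateWindowExperiment
  RUN_WINDOW = Date.new(2024, 1, 1)...Date.new(2024, 3, 1) # hypothetical dates

  def enabled?(today = Date.today)
    RUN_WINDOW.cover?(today)
  end
end

DateWindowExperiment.new.enabled?(Date.new(2024, 2, 1)) # => true
DateWindowExperiment.new.enabled?(Date.new(2024, 6, 1)) # => false
```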
### Segmentation Rules
Route specific populations to predetermined variants:
```ruby
class NewFeatureExperiment < ApplicationExperiment
  # Route VIP customers to the new feature
  segment :vip_customer?, variant: :candidate

  # Route enterprise trial users to the enhanced experience
  segment :enterprise_trial?, variant: :candidate

  # Route users from specific campaigns to specific variants
  segment(variant: :candidate) { context.campaign == 'product_launch_2024' }

  private

  def vip_customer?
    context.actor&.account_value.to_i > 100_000
  end

  def enterprise_trial?
    context.actor&.trial_tier == 'enterprise'
  end
end
```
**Key behaviors:**
- Segmentation rules are evaluated in order; the first match wins
- Segmented assignments are cached for consistency
- Perfect for gradually expanding successful experiments
- Enables sophisticated population targeting
### Lifecycle Callbacks
Execute custom logic at different stages of experiment execution:
```ruby
class PerformanceExperiment < ApplicationExperiment
  # Run before the variant is determined
  before_run :log_experiment_start

  # Run after the variant is executed
  after_run :record_timing_metrics, :notify_analytics_team

  # Wrap the entire execution
  around_run do |experiment, block|
    start_time = Time.current
    result = block.call
    duration = Time.current - start_time

    Metrics.record("experiment.#{experiment.name}.duration", duration)
    result
  end

  private

  def log_experiment_start
    Rails.logger.info("Starting experiment: #{name}")
  end

  def record_timing_metrics
    # Custom timing logic
  end

  def notify_analytics_team
    # Send to analytics platform
  end
end
```
**Use cases for callbacks:**
- Performance monitoring and APM integration
- Custom analytics and data warehouse updates
- Experiment-specific logging and debugging
- Integration with external systems
### Progressive Rollout with `only_assigned`
Control experiment expansion by only showing features to users already assigned to the experiment.
This is critical for managing blast radius and controlled rollouts:
**The Challenge**: You launch an experiment to 10% of new signups. Later, you want to show experimental features on other pages, but only to users already in the experiment - not expand to 10% of all users across the platform.
**The Solution**: Use `only_assigned: true`
```ruby
# Step 1: Assign users during signup (10% of new signups)
class RegistrationsController < ApplicationController
  def create
    user = User.create!(user_params)

    # This assigns 10% to candidate, 90% to control
    experiment(:onboarding_v2, actor: user).publish

    redirect_to dashboard_path
  end
end

# Step 2: Later, show features only to those already assigned
class DashboardController < ApplicationController
  def show
    # This will NOT expand the experiment to 10% of all users.
    # Only users assigned in Step 1 will see the experimental UI.
    @show_new_features = experiment(:onboarding_v2,
      actor: current_user,
      only_assigned: true
    ).assigned.name == 'candidate'
  end
end
```

```haml
-# Step 3: Show UI conditionally across the app
- if experiment(:onboarding_v2, actor: current_user, only_assigned: true).run
  .new-onboarding-features
    = render 'enhanced_dashboard'
```
**Behavior with `only_assigned: true`:**
- ✅ If user already assigned → returns their cached variant
- ✅ If user not assigned → returns control, no tracking
- ✅ Experiment reach stays controlled
- ✅ Perfect for multi-page experimental experiences
**Real-world use cases:**
- **Post-signup experiences**: Assign at signup, show features throughout the app
- **Gradual feature expansion**: Roll out to 5%, then add more touchpoints without expanding population
- **Cleanup phases**: Maintain experience for existing participants while preventing new assignments
- **A/B testing with multiple surfaces**: Test a hypothesis across multiple pages without assignment leakage
### Custom Rollout Strategies
The platform supports multiple rollout strategies out of the box, and you can create custom strategies for your specific
needs.
**Built-in strategies:**
- [`Percent`](lib/gitlab/experiment/rollout/percent.rb) - Consistent percentage-based assignment (default, recommended)
- [`Random`](lib/gitlab/experiment/rollout/random.rb) - True random assignment (useful for load testing)
- [`RoundRobin`](lib/gitlab/experiment/rollout/round_robin.rb) - Cycle through variants (requires caching)
- [`Base`](lib/gitlab/experiment/rollout.rb) - Useful for building custom rollout strategies
```ruby
class LoadTestExperiment < ApplicationExperiment
  # Randomly test two different caching strategies
  default_rollout :random

  control { CacheStrategy::Redis }
  candidate { CacheStrategy::Memcached }
end

class GradualRolloutExperiment < ApplicationExperiment
  # Start with 5% in the new experience
  default_rollout :percent,
    distribution: { control: 95, candidate: 5 }
end
```
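To see why the `Percent` strategy yields consistent assignment, here is a simplified sketch of the idea (not the gem's exact implementation): hash the experiment name plus the context key into one of 100 buckets, then walk the distribution cumulatively. The same key always lands in the same bucket, so assignment never flips between runs.

```ruby
require 'zlib'

# Simplified sketch of percent-based assignment (not the gem's exact code):
# a CRC32 of "experiment:context_key" picks a stable bucket from 0-99, and
# the distribution is walked cumulatively to choose the variant.
def percent_assign(experiment_name, context_key, distribution)
  bucket = Zlib.crc32("#{experiment_name}:#{context_key}") % 100

  cumulative = 0
  distribution.each do |variant, percent|
    cumulative += percent
    return variant if bucket < cumulative
  end

  :control # fall back if the distribution sums to less than 100
end

first  = percent_assign(:signup_cta, 'user-42', control: 75, candidate: 25)
second = percent_assign(:signup_cta, 'user-42', control: 75, candidate: 25)
first == second # always true: assignment is deterministic per key
```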
You can build your own strategy by subclassing [`Base`](lib/gitlab/experiment/rollout.rb) or `Percent`; the
[Integration with Feature Flags (Flipper)](#integration-with-feature-flags-flipper) section above shows how to wire a
custom strategy into a feature flag system.
## Organizational Best Practices
### Experiment Lifecycle Management
**1. Hypothesis Formation**
```ruby
# Document your hypothesis in the experiment class
class CheckoutFlowExperiment < ApplicationExperiment
  # Hypothesis: Reducing checkout steps from 3 to 2 will increase completion rate
  # Success metric: 5% increase in checkout completion
  # Target: All free trial users
  # Duration: 2 weeks
  # Owner: @growth-team

  control { :three_step_checkout }
  candidate { :two_step_checkout }

  exclude :existing_customer
end
```
**2. Gradual Rollout**
```ruby
# Week 1: 5% rollout
default_rollout :percent, distribution: { control: 95, candidate: 5 }
# Week 2: Increase to 25% if metrics look good
default_rollout :percent, distribution: { control: 75, candidate: 25 }
# Week 3: Full rollout if successful
default_rollout :percent, distribution: { control: 0, candidate: 100 }
```
**3. Monitoring and Alerting**
```ruby
class CriticalPathExperiment < ApplicationExperiment
  ERROR_RATE_THRESHOLD = 0.05 # illustrative threshold

  after_run :monitor_performance
  after_run :alert_on_errors

  private

  def monitor_performance
    Metrics.increment("experiment.#{name}.#{assigned.name}")
  end

  def alert_on_errors
    if context.error_rate > ERROR_RATE_THRESHOLD
      PagerDuty.alert("High error rate in #{name}")
    end
  end
end
```
**4. Experiment Cleanup**
```ruby
# When experiment is conclusive, clean up:
# 1. Remove the experiment code
# 2. Promote winner to production
# 3. Document learnings
# Before cleanup, archive results:
experiment(:checkout_flow).publish
# Export data for historical analysis
```
### Team Collaboration Patterns
**Product + Engineering + Data Science**
```ruby
class CollaborativeExperiment < ApplicationExperiment
  # Product defines the hypothesis and variants
  control { :current_flow }
  candidate { :new_flow }

  # Engineering defines segmentation and rollout
  segment :beta_users, variant: :candidate
  default_rollout :percent, distribution: { control: 90, candidate: 10 }

  # Data science defines tracking and metrics
  after_run :track_funnel_step

  private

  def beta_users
    context.actor&.beta_features_enabled?
  end

  def track_funnel_step
    Analytics.track_experiment_step(
      experiment: name,
      variant: assigned.name,
      funnel_position: context.step,
      user_segment: context.actor&.segment
    )
  end
end
```
### Testing Strategy
Write tests for your experiments using the included RSpec matchers:
```ruby
RSpec.describe CheckoutFlowExperiment do
  describe 'exclusion' do
    it 'excludes existing customers' do
      customer = create(:user, :with_subscription)

      expect(experiment(:checkout_flow)).to exclude(actor: customer)
    end
  end

  describe 'segmentation' do
    it 'routes enterprise trials to candidate' do
      trial = create(:user, :enterprise_trial)

      expect(experiment(:checkout_flow))
        .to segment(actor: trial).into(:candidate)
    end
  end

  describe 'tracking' do
    it 'tracks checkout completion' do
      expect(experiment(:checkout_flow)).to track(:completed)
        .on_next_instance

      CheckoutService.complete(user: user)
    end
  end
end
```
### Naming Conventions
Establish clear naming conventions for your organization:
```ruby
# Good: Descriptive experiment names
class OnboardingFlowV2Experiment < ApplicationExperiment; end
class PricingPageAnnualFocusExperiment < ApplicationExperiment; end
class SearchAlgorithmNeuralExperiment < ApplicationExperiment; end
# Avoid: Vague names
class TestExperiment < ApplicationExperiment; end # What are we testing?
class ExperimentOne < ApplicationExperiment; end # No context
```
## Technical Reference
### How Experiments Work (Technical)
Understanding the experiment resolution flow helps you design better experiments and debug issues:
**Decision tree for variant assignment:**
```mermaid
graph TD
    GP[General Pool/Population] --> Running?[Rollout Enabled?]
    Running? -->|Yes| Forced?[Forced Assignment?]
    Running? -->|No| Excluded[Control / No Tracking]
    Forced? -->|Yes / Cached| ForcedVariant[Forced Variant]
    Forced? -->|No| Cached?[Cached? / Pre-segmented?]
    Cached? -->|No| Excluded?
    Cached? -->|Yes| Cached[Cached Value]
    Excluded? -->|Yes / Cached| Excluded
    Excluded? -->|No| Segmented?
    Segmented? -->|Yes / Cached| VariantA
    Segmented? -->|No| Rollout[Rollout Resolve]
    Rollout --> Control
    Rollout -->|Cached| VariantA
    Rollout -->|Cached| VariantB
    Rollout -->|Cached| VariantN

    class ForcedVariant,VariantA,VariantB,VariantN included
    class Control,Excluded excluded
    class Cached cached
```
**Key points:**
1. Rollout must be enabled for any variant assignment (including forced assignment)
2. Forced assignment takes priority over cache/exclusion/segmentation (via `glex_force` query parameter)
3. Cache provides consistency across calls
4. Segmentation takes priority over rollout
5. `only_assigned: true` exits early if no cache hit
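The priority order above can be pictured with a small, self-contained sketch. This is purely an illustration of the decision order, not the gem's actual implementation, and all names in it are made up:

```ruby
# Illustrative only: each keyword stands in for the corresponding check in
# the decision tree. A nil value means that step makes no decision.
def resolve_variant(enabled:, forced: nil, cached: nil, excluded: false, segmented: nil)
  return :control unless enabled # rollout must be enabled, even for forced assignment
  return forced if forced        # forced assignment takes priority
  return cached if cached        # the cache provides consistency across calls
  return :control if excluded    # exclusion assigns control without tracking
  return segmented if segmented  # segmentation takes priority over rollout
  :candidate                     # otherwise the rollout strategy resolves a variant
end

resolve_variant(enabled: false, forced: :red)                  # => :control
resolve_variant(enabled: true, forced: :red)                   # => :red
resolve_variant(enabled: true, cached: :blue, segmented: :red) # => :blue
```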
### Experiment Context and Stickiness
Internally, each experiment has a context "key": a unique, anonymous id derived from the given context. It allows the
same variant to be assigned across different calls to the experiment, is used in caching, and can be included in event
data downstream. This context "key" is how an experiment remains "sticky" to a given context, and is an important
aspect to understand.
**Context defines stickiness** - experiments remain consistent by generating an anonymous key from the context:
```ruby
# Sticky to user - same user gets same variant everywhere
experiment(:feature, actor: current_user)

# Sticky to project - all users on a project get the same experience
experiment(:feature, project: project)

# Sticky to user+project - same user gets same variant per project
experiment(:feature, actor: current_user, project: project)

# Custom stickiness - explicitly define what creates consistency
experiment(:feature, actor: current_user, project: project, sticky_to: project)
```
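To make the idea concrete, here is a stdlib-only sketch of deriving an anonymous, deterministic key from a context. The gem's real key derivation differs; this only demonstrates the property that identical contexts produce identical keys:

```ruby
require 'digest'

# Hash a normalized (sorted) representation of the context so that key
# order doesn't matter and no raw identifiers appear in the result.
def context_key(context)
  normalized = context.sort.map { |k, v| "#{k}:#{v}" }.join('|')
  Digest::SHA256.hexdigest(normalized)
end

# Same context, same key -- this is what makes an experiment "sticky".
context_key(actor: 42, project: 7) == context_key(project: 7, actor: 42) # => true

# Different contexts produce different keys, and therefore different stickiness.
context_key(actor: 42) == context_key(actor: 43) # => false
```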
**The `actor` keyword has special behavior:**
- Anonymous users → temporary cookie-based assignment
- Upon sign-in → cookie migrates to user ID automatically
- Enables consistent experience across anonymous → authenticated journey
### Using Experiments Beyond Views
By default, `Gitlab::Experiment` injects itself into the controller, view, and mailer layers, exposing the
`experiment` method application-wide in those layers. Some experiments extend beyond those layers, however, so you may
want to include the DSL elsewhere -- for instance in an irb session or the rails console, or in all of your service
objects, background jobs, or similar:
```ruby
# In all background jobs
class ApplicationJob < ActiveJob::Base
  include Gitlab::Experiment::Dsl
end

# In service objects
class ApplicationService
  include Gitlab::Experiment::Dsl
end

# In a console session
include Gitlab::Experiment::Dsl
experiment(:feature, actor: User.first).run
```
### Manual Variant Assignment
<details>
<summary>You can also specify the variant manually...</summary>
Generally, defining segmentation rules is a better way to approach routing into specific variants, but it's possible to
explicitly specify the variant when running an experiment.
Caching: It's important to understand what this might do to your data during rollout, so use this with careful
consideration. Any time a specific variant is assigned manually, or through segmentation (including `:control`) it will
be cached for that context. That means that if you manually assign `:control`, that context will never be moved out of
the control unless you do it programmatically elsewhere.
```ruby
include Gitlab::Experiment::Dsl

# Assign the candidate manually.
ex = experiment(:pill_color, :red, actor: User.first) # => #<PillColorExperiment:0x..>

# Run the experiment -- returning the result.
ex.run # => "red"

# If caching is enabled this will remain sticky between calls.
experiment(:pill_color, actor: User.first).run # => "red"
```
</details>
### Forced Variant Assignment (QA/UAT)
For testing and validation purposes, you can force a specific variant assignment via a URL query parameter. This is
useful for QA testing in staging or production environments where you need to verify a specific variant's behavior.
**Configuration:**
Forced assignment is disabled by default. Enable it in your initializer:
```ruby
Gitlab::Experiment.configure do |config|
  config.allow_forced_assignment = true
end
```
**Usage:**
Append the `glex_force` query parameter to any URL with the format `experiment_name:variant_name`:
```
https://your-app.com/signup?glex_force=myapp_signup_cta:candidate
```
The forced variant is written to the cache (Redis) on the same request, making it permanent for that context. The
query parameter only needs to be provided once -- after that, the variant is persisted in the cache like any normal
assignment.
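Conceptually, parsing the parameter looks something like this. The `experiment_name:variant_name` format comes from the docs above, but the helper itself is hypothetical, shown only for illustration:

```ruby
# Split "experiment_name:variant_name" into its two parts, rejecting
# malformed values so normal resolution can proceed instead.
def parse_glex_force(param)
  name, variant = param.to_s.split(':', 2)
  return nil if name.to_s.empty? || variant.to_s.empty?

  { experiment: name, variant: variant.to_sym }
end

parse_glex_force('myapp_signup_cta:candidate')
# => { experiment: "myapp_signup_cta", variant: :candidate }
parse_glex_force('missing_variant') # => nil
```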
#### Anonymous user (nil actor) -- initial assignment
This is the primary use case for QA testing signup flows and landing pages. The user is not signed in, so the actor is
nil and the experiment uses a cookie-based context key.
1. Anonymous user visits `https://your-app.com/signup?glex_force=signup_cta:candidate`
2. The forced variant `:candidate` is written to Redis under the cookie-based context key
3. The user signs in -- the standard cookie migration carries the forced variant to their real identity
4. All future requests use `:candidate` from Redis, permanently
This means a QA tester can force a variant before signup and have it follow the user through the entire
anonymous-to-authenticated journey.
#### Signed-in user -- initial assignment
When a signed-in user hasn't been assigned a variant yet, the force param assigns and caches it immediately:
```
https://your-app.com/dashboard?glex_force=new_feature:candidate
```
The variant is cached under the user's context key on this request. No further query parameter is needed.
#### Signed-in user -- re-assignment (overwriting an existing variant)
If a user was previously assigned `:control` (by the rollout strategy or a prior force), the force param overwrites
the cached value:
```
https://your-app.com/dashboard?glex_force=new_feature:candidate
```
The existing `:control` assignment in Redis is replaced with `:candidate`. This is useful when QA needs to switch a
user between variants without clearing the cache manually.
#### Disabled experiments and feature flags
Forced assignment requires the experiment to be enabled. If the experiment is disabled (as determined by the rollout
strategy's `enabled?` method), the `glex_force` parameter is ignored and normal resolution applies (which will assign
control).
This is intentional -- a disabled experiment may be disabled for valid reasons (incomplete implementation, known issues,
compliance constraints, etc.) and force assignment should not provide a way to bypass that decision. To use forced
assignment, ensure the experiment is enabled first through your rollout strategy.
**Important notes:**
- The experiment name in the parameter must match the full experiment name (including any configured `name_prefix`).
- If the variant name doesn't match a registered behavior, the forced assignment is ignored and normal variant resolution
proceeds (typically resulting in the control variant).
- Forced assignment does not override a variant that was already set via the constructor or an explicit `assigned()`
call within the same request.
- This feature requires a `request` object with `params` to be available in the experiment context.
> [!NOTE]
> Because forcing a variant bypasses the exclusion and segmentation process, it can mask errors in that logic. If your experiment relies on exclusion or segmentation rules, avoid this testing method.
### Experiment Signature
The best way to understand the details of an experiment is through its signature. An example signature can be retrieved
by calling the `signature` method, and looks like the following:
```ruby
experiment(:example).signature # => {:variant=>"control", :experiment=>"example", :key=>"4d7aee..."}
```
An experiment signature is useful when tracking events and when using experiments on the client layer. The signature can
also contain the optional `migration_keys`, and `excluded` properties.
### Return Value
By default the return value of calling `experiment` is a `Gitlab::Experiment` instance, or whatever class the
experiment is resolved to, which likely inherits from `Gitlab::Experiment`. In simple cases you may want only the
results of running the experiment though. You can call `run` within the block to get the return value of the assigned
variant.
```ruby
# Normally an experiment instance.
experiment(:example) do |e|
  e.control { 'A' }
  e.candidate { 'B' }
end # => #<Gitlab::Experiment:0x...>

# But calling `run` causes the return value to be the result.
experiment(:example) do |e|
  e.control { 'A' }
  e.candidate { 'B' }
  e.run
end # => 'A'
```
### Context migrations
There are times when we need to change context while an experiment is running.
We make this possible by passing the migration data to the experiment.
Take for instance, that you might be using `version: 1` in your context currently.
To migrate this to `version: 2`, provide the portion of the context you wish to change using a `migrated_with` option.
In providing the context migration data, we can resolve an experience and its events all the way back.
This can also help in keeping our cache relevant.
```ruby
# First implementation.
experiment(:example, actor: current_user, version: 1)

# Migrate just the `:version` portion.
experiment(:example, actor: current_user, version: 2, migrated_with: { version: 1 })
```
You can add or remove context by providing a `migrated_from` option.
This approach expects a full context replacement -- i.e. what it was before you added or removed the new context key.
If you wanted to introduce a `version` to your context, provide the full previous context.
```ruby
# First implementation.
experiment(:example, actor: current_user)

# Migrate the full context of `{ actor: current_user }` to `{ actor: current_user, version: 1 }`.
experiment(:example, actor: current_user, version: 1, migrated_from: { actor: current_user })
```
When you migrate context, this information is included in the signature of the experiment.
This can be used downstream in event handling and reporting to resolve a series of events back to a single experience,
while also keeping everything anonymous.
An example of our experiment signature when we migrate would include the `migration_keys` property:
```ruby
ex = experiment(:example, version: 1)
ex.signature # => {:key=>"20d69a...", ...}

ex = experiment(:example, version: 2, migrated_from: { version: 1 })
ex.signature # => {:key=>"9e9d93...", :migration_keys=>["20d69a..."], ...}
```
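Downstream, the `migration_keys` property lets reporting stitch events from before and after a migration into one experience. A rough sketch of that resolution follows; the keys and event shapes are made up, purely for illustration:

```ruby
# Map each old (migrated-from) key to the key that replaced it, then
# resolve every event's key forward through the chain before grouping.
def group_by_experience(events)
  alias_of = {}
  events.each do |event|
    (event[:migration_keys] || []).each { |old_key| alias_of[old_key] = event[:key] }
  end

  resolve = ->(key) { alias_of.key?(key) ? resolve.call(alias_of[key]) : key }
  events.group_by { |event| resolve.call(event[:key]) }
end

events = [
  { key: '20d69a', name: 'assignment' },
  { key: '9e9d93', migration_keys: ['20d69a'], name: 'click' }
]
group_by_experience(events).keys # => ["9e9d93"]
```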
### Cookies and the actor keyword
We use cookies to auto migrate an unknown value into a known value, often in the case of the current user.
The implementation of this uses the same concept outlined above with context migrations, but will happen automatically
for you if you use the `actor` keyword.
When you use the `actor: current_user` pattern in your context and the actor is nil, a special cookie is set for the
experiment. This cookie holds a temporary, randomized uuid and isn't associated with any user.
When an actor can finally be provided -- when the user signs in -- the context key is auto migrated from the cookie to
the one generated from that actor, and the cookie is deleted.
```ruby
# The actor key is not present, so no cookie is set.
experiment(:example, project: project)

# The actor key is present but nil, so the cookie is set and used.
experiment(:example, actor: nil, project: project)

# The actor key is present and isn't nil, so the cookie value (if found) is
# migrated forward and the cookie is deleted.
experiment(:example, actor: current_user, project: project)
```
Note: The cookie is deleted when resolved, but can be assigned again if the `actor` is ever nil again.
A good example of this scenario would be on a sign in page.
When a potential user arrives, they would never be known, so a cookie would be set for them, and then resolved/removed
as soon as they signed in.
This process would repeat each time they arrived while not being signed in and can complicate reporting unless it's
handled well in the data layers.
Note: To read and write cookies, we provide the `request` from within the controller and views.
The cookie migration will happen automatically if the experiment is within those layers.
You'll need to provide the `request` as an option to the experiment if it's outside of the controller and views.
```ruby
experiment(:example, actor: current_user, request: request)
```
Note: For edge cases, you can pass the cookie through by assigning it yourself -- e.g. `actor:
request.cookie_jar.signed['example_id']`.
The cookie name is the full experiment name (including any configured prefix) with `_id` appended -- e.g.
`pill_color_id` for the `PillColorExperiment`.
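As a quick sketch of that naming rule (a hypothetical helper, shown only to make the convention concrete):

```ruby
# The cookie name is the full experiment name -- prefix included, when one
# is configured -- with "_id" appended.
def experiment_cookie_name(experiment_name, prefix: nil)
  "#{[prefix, experiment_name].compact.join('_')}_id"
end

experiment_cookie_name('pill_color') # => "pill_color_id"
experiment_cookie_name('pill_color', prefix: 'mycompany') # => "mycompany_pill_color_id"
```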
### Client layer
Experiments that have been run (or published) during the request lifecycle can be pushed into the client layer by
injecting the published experiments into javascript in a layout or view using something like:
```haml
= javascript_tag(nonce: content_security_policy_nonce) do
  window.experiments = #{raw ApplicationExperiment.published_experiments.to_json};
```
The `window.experiments` object can then be used in your client implementation to determine experimental behavior at
that layer as well.
For instance, we can now access the `window.experiments.pill_color` object to get the variant that was assigned, if the
context was excluded, and to use the context key in our client side events.
## Adoption Guide for Organizations
### Phase 1: Foundation (Week 1-2)
1. **Install and configure** the gem
2. **Set up analytics integration** in the initializer
3. **Create a base experiment class** for your organization
4. **Run your first small experiment** (low-risk, high-visibility)
### Phase 2: Team Enablement (Week 3-4)
1. **Document your organization's patterns** (naming, testing, rollout)
2. **Train teams** on experiment lifecycle
3. **Establish experiment review process** (hypothesis → implementation → analysis)
4. **Run 2-3 experiments** across different teams
### Phase 3: Scale (Month 2+)
1. **Integrate with feature flag system** (if applicable)
2. **Build dashboards** for experiment monitoring
3. **Establish data review cadence** (weekly experiment reviews)
4. **Scale to 5-10 concurrent experiments**
### Common Pitfalls to Avoid
**❌ Don't: Run experiments without clear success metrics**
```ruby
class VagueExperiment < ApplicationExperiment
  # What are we trying to learn?
  control { :old_way }
  candidate { :new_way }
end
```
**✅ Do: Document hypothesis and success criteria**
```ruby
class CheckoutOptimizationExperiment < ApplicationExperiment
  # Hypothesis: Showing trust badges increases checkout completion
  # Success Metric: 5% increase in completion rate
  # Target: Free trial users
  # Duration: 2 weeks

  control { :without_badges }
  candidate { :with_trust_badges }
end
```
**❌ Don't: Let experiments run indefinitely**
- Set time bounds for every experiment
- Review results at planned intervals
- Make a decision: promote winner, revert, or iterate
**✅ Do: Build experiment cleanup into your process**
- Schedule experiment review meetings
- Archive experiment results
- Clean up experiment code after conclusion
## Platform Configuration
The platform requires initial configuration to integrate with your analytics and infrastructure.
**Basic configuration** (in `config/initializers/gitlab_experiment.rb`):
```ruby
Gitlab::Experiment.configure do |config|
  # How experiment events are tracked
  config.tracking_behavior = lambda do |event_name, **data|
    YourAnalytics.track(
      user_id: data[:key], # Anonymous experiment key
      event: "experiment_#{event_name}",
      properties: data
    )
  end

  # How experiments are cached (recommended: Redis)
  config.cache = Gitlab::Experiment::Cache::RedisHashStore.new(
    Redis.new(url: ENV['REDIS_URL']),
    expires_in: 30.days
  )

  # Optional: Prefix all experiment names
  config.name_prefix = 'mycompany'

  # Optional: Default rollout strategy
  config.default_rollout = Gitlab::Experiment::Rollout::Percent.new
end
```
See the [complete initializer template](lib/generators/gitlab/experiment/install/templates/initializer.rb.tt) for all
configuration options.
### Advanced: Caching Configuration
**Why caching matters:**
- Ensures consistent user experience across sessions
- Improves performance (skip rollout logic after first assignment)
- Required for `only_assigned` functionality
- Enables context migrations
**Cache options:**
```ruby
# Option 1: Use Rails cache (simple)
Gitlab::Experiment.configure do |config|
  config.cache = Rails.cache
end

# Option 2: Use Redis directly (recommended for scale)
Gitlab::Experiment.configure do |config|
  config.cache = Gitlab::Experiment::Cache::RedisHashStore.new(
    Redis.new(url: ENV['REDIS_URL']),
    expires_in: 30.days
  )
end

# Option 3: No caching (deterministic rollout strategies only)
Gitlab::Experiment.configure do |config|
  config.cache = nil
end
```
The gem includes the [`RedisHashStore`](lib/gitlab/experiment/cache/redis_hash_store.rb) cache store, which is
documented in its implementation.
**Important:** Caching changes how rollout strategies behave. Once cached, subsequent calls return the cached value regardless of rollout strategy changes.
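The cache-first behavior can be sketched with a plain in-memory hash standing in for Redis. This is not the gem's implementation; it only demonstrates why cached assignments outlive rollout changes:

```ruby
# Once a variant is stored for a context key, later calls return the stored
# value and never re-run the rollout logic.
class VariantCache
  def initialize
    @store = {}
  end

  def resolve(key, &rollout)
    @store[key] ||= rollout.call
  end
end

cache = VariantCache.new
cache.resolve('ctx-1') { :control }   # => :control (assigned by the rollout block)
cache.resolve('ctx-1') { :candidate } # => :control (the cached value wins)
```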
### Advanced: Custom Rollout Strategies
Build custom integrations with your existing infrastructure:
**Example: Flipper Integration**
```ruby
# We put it in this module namespace so we can get easy resolution when
# using `default_rollout :flipper` in our usage later.
module Gitlab::Experiment::Rollout
  class Flipper < Percent
    def enabled?
      ::Flipper.enabled?(experiment.name, self)
    end

    def flipper_id
      "Experiment;#{id}"
    end
  end
end
```
So, Flipper needs something that responds to `flipper_id`, and since our experiment "id" (which is also our context key)
is unique and consistent, we're going to give that to Flipper to manage things like percentage of actors etc.
You might want to consider something more complex here if you're using things that can be flipper actors in your
experiment context.
Anyway, now you can use your custom `Flipper` rollout strategy by instantiating it in configuration:
```ruby
Gitlab::Experiment.configure do |config|
  config.default_rollout = Gitlab::Experiment::Rollout::Flipper.new
end
```
Or if you don't want to make that change globally, you can use it in specific experiment classes:
```ruby
class PillColorExperiment < Gitlab::Experiment # OR ApplicationExperiment
  # ...registered behaviors

  default_rollout :flipper,
    distribution: { control: 26, red: 37, blue: 37 } # optionally specify distribution
end
```
Now, enabling or disabling the Flipper feature flag will control if the experiment is enabled or not.
If the experiment is enabled, as determined by our custom rollout strategy, the standard resolution logic will be
executed, and a variant (or control) will be assigned.
```ruby
experiment(:pill_color).enabled? # => false
experiment(:pill_color).assigned.name # => "control"

# Now we can enable the feature flag to enable the experiment.
Flipper.enable(:pill_color) # => true

experiment(:pill_color).enabled? # => true
experiment(:pill_color).assigned.name # => "red"
```
### Middleware
There are times when you'll need to do link tracking in email templates, or markdown content -- or other places you
won't be able to implement tracking.
For these cases a middleware layer that can redirect to a given URL while also tracking that the URL was visited has
been provided.
In Rails this middleware is mounted automatically, with a base path of what's been configured for `mount_at`.
If this path is nil, the middleware won't be mounted at all.
```ruby
Gitlab::Experiment.configure do |config|
  config.mount_at = '/experiment'

  # Only redirect on permitted domains.
  config.redirect_url_validator = ->(url) { (url = URI.parse(url)) && url.host == 'gitlab.com' }
end
```
Once configured to be mounted, the experiment tracking redirect URLs can be generated using the Rails route helpers.
```ruby
ex = experiment(:example)

# Generating the path/url using the path and url helper.
experiment_redirect_path(ex, url: 'https://gitlab.com/docs') # => "/experiment/example:20d69a...?https://gitlab.com/docs"
experiment_redirect_url(ex, url: 'https://gitlab.com/docs') # => "https://gitlab.com/experiment/example:20d69a...?https://gitlab.com/docs"

# Manually generating a url is a bit less clean, but is possible.
"#{Gitlab::Experiment::Configuration.mount_at}/#{ex.to_param}?https://docs.gitlab.com/"
```
## Testing (rspec support)
This gem comes with some rspec helpers and custom matchers.
To get the experiment specific rspec support, require the rspec support file:
```ruby
require 'gitlab/experiment/rspec'
```
Any file in `spec/experiments` path will automatically get the experiment specific support, but it can also be included
in other specs by adding the `:experiment` label:
```ruby
describe MyExampleController do
  context "with my experiment", :experiment do
    # experiment helpers and matchers will be available here.
  end
end
```
### Stub helpers
You can stub experiment variant resolution using the `stub_experiments` helper. The helper supports multiple formats for
flexibility:
**Simple hash format:**
```ruby
it "stubs experiments using hash format" do
  stub_experiments(pill_color: :red)

  experiment(:pill_color) do |e|
    expect(e).to be_enabled
    expect(e.assigned.name).to eq('red')
  end
end
```
**Hash format with options:**
```ruby
it "stubs experiments with assigned option" do
  stub_experiments(pill_color: { variant: :red, assigned: true })

  experiment(:pill_color) do |e|
    expect(e).to be_enabled
    expect(e.assigned.name).to eq('red')
  end
end
```
**Mixed formats (symbols and hashes together):**
```ruby
it "stubs multiple experiments with mixed formats" do
  stub_experiments(
    pill_color: :red,
    hippy: { variant: :free_love, assigned: true },
    yuppie: :financial_success
  )

  expect(experiment(:pill_color).assigned.name).to eq('red')
  expect(experiment(:hippy).assigned.name).to eq('free_love')
  expect(experiment(:yuppie).assigned.name).to eq('financial_success')
end
```
**Boolean true (allows rollout strategy to assign):**
```ruby
it "stubs experiments while allowing the rollout strategy to assign the variant" do
  stub_experiments(pill_color: true) # only stubs enabled?

  experiment(:pill_color) do |e|
    expect(e).to be_enabled
    # expect(e.assigned.name).to eq([whatever the rollout strategy assigns])
  end
end
```
#### Testing `only_assigned` behavior
When you use the `assigned: true` option in `stub_experiments`, the `find_variant` method is automatically stubbed
to return the specified variant. This allows you to test the `only_assigned` behavior:
```ruby
it "tests only_assigned behavior with a cached variant" do
  stub_experiments(pill_color: { variant: :red, assigned: true })

  experiment_instance = experiment(:pill_color, actor: user, only_assigned: true)

  expect(experiment_instance).not_to be_excluded
  expect(experiment_instance.run).to eq('red')
end

it "tests only_assigned behavior without a cached variant" do
  stub_experiments(pill_color: :red)

  experiment_instance = experiment(:pill_color, actor: user, only_assigned: true)

  expect(experiment_instance).to be_excluded
  expect(experiment_instance.run).to eq('red')
end
```
**Note:** The `assigned: true` option only works correctly when caching is disabled. When caching is enabled,
`find_variant` will attempt to read from the actual cache store rather than using the stub. In this case, you can
populate the cache naturally by running the experiment first to assign and cache a variant before testing with
`only_assigned: true`.
### Registered behaviors matcher
It's useful to test our registered behaviors, as well as their return values when we implement anything complex in them.
The `register_behavior` matcher is useful for this.
```ruby
it "tests our registered behaviors" do
  expect(experiment(:pill_color)).to register_behavior(:control)
    .with('grey') # with a default return value of "grey"
  expect(experiment(:pill_color)).to register_behavior(:red)
  expect(experiment(:pill_color)).to register_behavior(:blue)
end
```
### Exclusion and segmentation matchers
You can also easily test your experiment classes using the `exclude` and `segment` matchers.
```ruby
let(:excluded) { double(first_name: 'Richard', created_at: Time.current) }
let(:segmented) { double(first_name: 'Jeremy', created_at: 3.weeks.ago) }

it "tests the exclusion rules" do
  expect(experiment(:pill_color)).to exclude(actor: excluded)
  expect(experiment(:pill_color)).not_to exclude(actor: segmented)
end

it "tests the segmentation rules" do
  expect(experiment(:pill_color)).to segment(actor: segmented)
    .into(:red) # into a specific variant
  expect(experiment(:pill_color)).not_to segment(actor: excluded)
end
```
### Tracking matcher
Tracking events is a major aspect of experimentation, and because of this we try to provide a flexible way to ensure
your tracking calls are covered.
```ruby
before do
  stub_experiments(pill_color: true) # stub the experiment so tracking is permitted
end

it "tests that we track an event on a specific instance" do
  expect(subject = experiment(:pill_color)).to track(:clicked)

  subject.track(:clicked)
end
```
You can use the `on_next_instance` chain method to specify that the tracking call could happen on the next instance of
the experiment.
This can be useful if you're calling `experiment(:example).track` downstream and don't have access to that instance.
Here's a full example of the methods that can be chained onto the `track` matcher:
```ruby
it "tests that we track an event with specific details" do
  expect(experiment(:pill_color)).to track(:clicked, value: 1, property: '_property_')
    .on_next_instance # any time in the future
    .with_context(foo: :bar) # with the expected context
    .for(:red) # and assigned the correct variant

  experiment(:pill_color, :red, foo: :bar).track(:clicked, value: 1, property: '_property_')
end
```
## Tracking, anonymity and GDPR
We generally try not to track things like user identifying values in our experimentation.
What we can and do track is the "experiment experience" (a.k.a. the context key).
We generate this key from the context passed to the experiment.
This allows creating funnels without exposing any user information.
This library attempts to be non-user-centric, in that a context can contain things like a user or a project.
If you only include a user, that user would get the same experience across every project they view.
If you only include the project, every user who views that project would get the same experience.
Each of these approaches could be desirable given the objectives of your experiment.
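For example, a funnel can be computed from anonymous context keys alone. The event shapes and keys below are made up for illustration:

```ruby
# Count distinct context keys per funnel step -- no user identifiers needed.
def funnel_counts(events, steps)
  steps.map do |step|
    [step, events.select { |e| e[:event] == step }.map { |e| e[:key] }.uniq.size]
  end.to_h
end

events = [
  { key: 'a1', event: 'assignment' },
  { key: 'a1', event: 'clicked' },
  { key: 'b2', event: 'assignment' }
]
funnel_counts(events, %w[assignment clicked])
# => { "assignment" => 2, "clicked" => 1 }
```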
## Development
After cloning the repo, run `bundle install` to install dependencies.
## Running tests
The test suite requires Redis to be running.
[Install](https://redis.io/docs/latest/operate/oss_and_stack/install/archive/install-redis/) and start Redis
(`redis-server`) before running tests.
Once Redis is running, execute the tests:
`bundle exec rake`
You can also run `bundle exec pry` for an interactive prompt that will allow you to experiment.
## Contributing
Bug reports and merge requests are welcome on GitLab at https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment.
This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the
[Contributor Covenant](http://contributor-covenant.org) code of conduct.
Make sure to include a changelog entry in your commit message and read the [changelog entries
section](https://docs.gitlab.com/ee/development/changelog.html).
## Release process
Please refer to the [Release Process](docs/release_process.md).
## License
The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
## Code of conduct
Everyone interacting in the `Gitlab::Experiment` project’s codebases, issue trackers, chat rooms and mailing lists is
expected to follow the [code of conduct](CODE_OF_CONDUCT.md).